On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote:
[...]

The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning).

Why? Don't you realize that the context matters and [...]

Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder.

...

Yes, in an absolute sense it will take more time to have to parse the context. But that sounds like a case of premature optimization. If we are worried about saving time, then what about the tooling? Compiler speed? IDE startup time? All of these take time too, and optimizing one single aspect, as you know, won't necessarily save much.

Maybe the language itself should be designed so there are no ambiguities at all? A single, simple syntax for each function? A new keyboard design should be implemented (ultimately a direct brain-to-editor interface for the fastest input, excluding the time for development and learning)?

So, in this case I have to go with the practical view: it may be theoretically slower, but the cost is so insignificant that worrying about it is over-optimization. I think you would agree, at least in this case. Again, the exact syntax is not important to me. If you really think it matters that much to you, and it does (you are not tricking yourself), then use a different keyword.

When I see something I try to see it at once rather than reading it left to right. It is how music is read properly, for example. One can't read left to right and process the notes in real time fast enough. You must "see at once" a large chunk.

When I see foo(A in B)() I see it at once, not in parts or sub-symbols (subconsciously that may be what happens, but either it is so quick, or my brain has learned to see differently, that I do not feel it to be any slower).

That is, I do not read it like f, o, o, (, A, , i, ...

Rather, it is just like how one sees an image. Sure, there is clustering, such as foo and (...), and I do sub-parse those at some point, but the context is derived very quickly. Now, of course, I do make assumptions to be able to do that. Obviously I have to sort of assume I'm reading D code and that the expression is a templated function, etc. But that is required regardless.
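
For concreteness, here is roughly what the three contexts of "in" under discussion look like; the third one is the hypothetical proposal, not something current D accepts:

void takesIn(in string s) { }        // 1) "in" as a parameter storage class

void main()
{
    int[string] aa = ["a": 1];
    if ("a" in aa) { }               // 2) "in" as the AA membership operator
}

// 3) The proposed third context (hypothetical, NOT valid D today):
//    restricting a template parameter to a finite set of types.
// void foo(T in (int, float, string))(T x) { }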

It's like seeing a picture of an ocean. You can see the global characteristics immediately without getting bogged down in the details until you need them. You can determine the approximate time of day (morning, noon, evening, night) almost instantaneously without even knowing much else.

To really counter your argument: what about parentheses? They have the same problem as in. They have perceived ambiguity... but they are not actually ambiguous. So your argument should apply to them too and you should be against them as well, but are you? [To be clear here: foo()() and (3+4) have three different use cases of ()'s... the first is template arguments, the second is function arguments, and the third is expression grouping.]

If you are, then you are being logical and consistent; if you are not, then you are being neither logical nor consistent. If you fall in the latter case, I suggest you re-evaluate the way you think about such things, because you are picking and choosing.
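
To make the parenthesis point concrete, the three uses look like this in code:

T add(T)(T a, T b) { return a + b; }    // (T) is the template parameter list, (T a, T b) the function parameter list

void main()
{
    auto x = add!(int)(1, 2);            // !(int) supplies the template argument, (1, 2) are the call arguments
    auto y = (3 + 4) * 2;                // (3 + 4) is plain expression grouping
}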

Now, if you are just stating the mathematical fact that it takes longer, then I can't really deny that, although neither of us can technically prove it either way, because that would require knowing exactly how the brain processes the information.








[...]

Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.

Yes, but the only reason you have given for why it shouldn't be used is that you believe one shouldn't overload keywords because it makes the meaning harder to parse. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is harder, and then we would have to find out who is more right.

As I countered that in the above, I don't think your rebuttal is valid.

Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;) Again, you don't actually know how the brain processes information (no one does, it is all educated guesses). You use the premise that the more information one has to process, the more time it takes... which seems logical, but it is not necessarily directly applicable to the interpretation of written symbols. Think of an image. We can process a ton of information nearly instantly, and if the logic applied, we would expect images to take much longer to "read" than the written word, yet it is exactly the opposite... and yet symbols are just images (with a specific order we must follow to make sense of them).

Have you ever thought of a programming language based on images? Maybe that would be a much faster way to "read" the source? Of course, some might claim that all life is, is source code, and "real life" is just the most natural representation of code.


I have a logical argument against your absolute restriction though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc. because they are visibly too close to each other.

It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.

My feeling, though, is that you are actually just forming principles based on whim rather than a true logical basis; I could be wrong. How you answer my questions above will let me know better.

To simplify it down: do you have the same problems with all the ambiguities that already exist in almost all programming languages, which everyone is OK with on a practical level on a daily basis?



[...]


If that is the case then go for it ;) It is not a concern of mine. You tell me the syntax and I will use it. (I'd have no choice, of course, but if it's short and sweet then I won't have any problem).

I'm discussing this as a matter of theory, I don't have a use for it.

Ok, I do, which is what led me to the problem, as all my "enhancements" do. I try something I think is an "elegant" way to simplify complexity in my program (from the perspective of the user of the code, which will generally be me)... I run into a wall, I post a message, and I usually get shot down immediately with "It can't be done"... then I have to find a way to do it. I find the way [usually using string mixins, thank god for them]. I post it... someone else then usually comes along with a better or simpler way. Usually when I say something like "This should be in the compiler", I immediately get shot down again with "It adds complexity to the compiler". In which case I try to explain that everything adds complexity, and this solution would add very little since one can already do it in the library in a simple way... Usually the library solution is not robust and hence not good (I only worked it out enough for my use cases). ...and so the wheel goes around and around. But the logic is usually the same: "we can't do that"... which I eventually just interpret as "we don't wanna do that because we have better things to do", which is fine if at least that were admitted in the first place instead of wasting my time trying to explain that it can be done, coming up with a solution, etc. (Of course, it's ultimately my fault since I am the one in control of my time; I mainly do it because it could help others in the same position I was in.)




[...]

Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of the day, as it's not an insignificant change to the language, someone will have to do the work and write a proposal.


My main issue with going through the trouble is that, basically, I have more important things to do. If I were going to try to get D to make all the changes I actually wanted, I'd be better off writing my own language the way I envision and want it... but I don't have 10+ years to invest in such a beast, and to do it right would require my full attention, which I'm not willing to give, because again, I have better things to do (things I really enjoy).

So, all I can do is hopefully stoke the fire enough to get someone else interested in the feature and have them do the work. If they don't, then they don't, that is fine. But I feel like I've done something to try to right a wrong.

That could happen, though historically speaking, usually things have gotten included in D only when the major proponent of something like this does the hard work (otherwise they seem to just fizzle out).

Yes. Because things take time and we only have so much. I am fine with that. I'm fine with a great idea going nowhere because no one has the time to invest in it. It's unfortunate, but life is life... it's only when people are ultimately trying to deceive, or are just truly ignorant, that I start to have a problem with them.



[...]

AFAIK the difference between syntax sugar and enabling syntax in PLs usually comes down to the former allowing you to express concepts already representable by other constructs in the PL; when encountered, the syntax sugar could be lowered by the compiler to the more verbose syntax and still be both valid in the PL and recognizable as the concept (while this is vague, a prominent example would be lambdas in Java 8).

Yes, but everything is "lowered"; it's just how you define it.

Yes, and w.r.t. my initial point, I did define it as "within the PL itself, preserving the concept".
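
Right, and a D-flavoured version of that idea (analogous to the Java 8 lambda example) would be sugar that the compiler lowers to something you could have written by hand in the language itself, for example:

import std.stdio : writeln;

void main()
{
    auto inc = (int x) => x + 1;               // shorthand lambda...
    auto incLong = (int x) { return x + 1; };  // ...is sugar for the longer function literal
    writeln(inc(1), " ", incLong(1));

    // Similarly, scope(exit) is lowered to an equivalent try/finally
    // wrapped around the rest of the scope.
    scope(exit) writeln("done");
    writeln("working");
}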



[...]

Why do you think that? Less than ten people have participated in this thread so far.

I am not talking about just this thread; I am talking about all threads and all things in which humans attempt to determine the use of something. [...]

Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.

Lol, you should have plenty of proof. Just look around. Just look at your own experiences in your life. I don't know much about you, but I imagine that you have all the proof you need. Look at how businesses are run. Look at how people "solve" problems. Look at the state of the world. You can make claims that it's this or that, as I can... but there is a common denominator among it all.

Also, just think about how humans are able to judge things. Surely they can only judge based on what they know? How can we judge things based on what we don't know? Seems impossible, right? Take someone you know who constantly makes bad decisions... why? Are they geniuses or idiots? I think it's pretty provable that the more intelligent a person is, the better they are able to make decisions about something... and this is general. A programmer is surely able to make better decisions about coding than a non-programmer. Look at all the business people in the world who know absolutely nothing about technological factors but make decisions about them on a daily basis... and the ramifications of those decisions are easily seen.

I'm not saying it's a simple problem, but there are relatively simple overarching rules involved. The more a person knows about life, the better the decisions they can make about life (but the life thing is the complex part, I don't disagree).

To tie this in to what we are talking about: if someone has never used templated functions in D, how can they make decisions on whether templated functions are useful or not? Should be obvious. The complexity comes in when they actually have used them... but then we have to know "How much do they use them?", "How do they use them?", "What other things do they know about that influence their usage of them?", etc.

Most people are satisfied with just stopping at some arbitrary point when they get tired and have to go to bed... I'm not one of those people (for better or worse).



[...]

Why do you assume that? I've not seen anyone here claiming template parameter specialization to one of n types (which is the idea I replied to) couldn't be done in theory, only that it can't be done right now (the only claim that it can't be done that I noticed was w.r.t. (unspecialized) templates and virtual functions, which is correct due to D supporting separate compilation; specialized templates, however, should work in theory).

Let me quote the first two responses:

"It can't work this way. You can try std.variant."

That is a reply to your mixing (unspecialized) templates and virtual functions, not to your idea of generalizing specialized templates.

That might have been the reply, and it may be valid in a certain context, and may actually be the correct reply in the context I gave (I could have done a better job, I admit),

BUT, if D already implemented such a specialization feature, a different response would have occurred, such as "You need to limit T to a finite set", and I would have merrily moved along.

But it tries to force me into a solution that is not acceptable.

In fact, I was using specialization in the sense that `T` could only come from a finite set... but, again, D does not allow me any way to specify that, so how could I properly formulate a solution that would make sense without going into a lot of detail... a lot of detail that I actually don't know, because I'm not a full-time D aficionado?
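
To be fair, a plain template constraint can already narrow T to a finite set today; what it can't give me is virtual dispatch on top of that, which is the part I was after. A minimal sketch, with a hypothetical name:

// "T must be one of these types", expressed with today's template constraints;
// process() is just an example name, not from my real code.
void process(T)(T value)
    if (is(T == int) || is(T == string))
{
    // shared implementation for the allowed types
}

But this stays an ordinary non-virtual template, so it doesn't solve the class-hierarchy case.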

The code I posted was a simplification, possibly an oversimplification, of my real code, in which I tried to express something I wanted to do, knew that there should be no real technical limitations (in what I wanted, not in how D does it), and thought that D should be able to do in some way (mainly because D can do just about anything in some way due to its rich feature set).


and

"It is not possible to have a function be both virtual and templated. A function template generates a new function definition every time that it's a called with a new set of template arguments. [...]"

Same here.

But it's not true... unless you mean "it is not possible currently in D to do this".

Neither of those statements is logically valid, because it is possible (only with a restricted number of template parameter values). It is only true for an infinite number, which didn't apply to me since I had a finite number.

Basically an absolute statement is made, something like "All numbers are odd", which is absolutely false even if it is partially true. "All odd numbers are odd" is obviously true. One should clarify if the context isn't clear, so that no confusion arises.

"It is not possible to have a function be both virtual and templated."

Surely you disagree with that statement? While there is some ambiguity, since templated functions are actually syntactic sugar while virtual functions are actually coded, we can obviously have a virtual templated function. (Not in D currently, but there is no theoretical reason why it can't exist; we've already discussed that.)

"It is not possible to have a function be both virtual and [arbitrarily] templated." Would, I believe, be a true statement.

while

"It is not possible to have a function be both virtual and [finitely] templated." would be a false statement.

In fact, I bet that if you asked Jonathan what he believed when he wrote that, he believed it to be true for all cases (finite or not), as he probably never even thought about the finite case enough to realize it matters.

Anyways, we've beaten this horse to death! I think we basically agree on the bulk of things, so it's not a big deal. Most of the issue with communication is the lack of clarity and the ambiguity in things (wars have been started and millions of people have died over such things, and many personal relationships have been destroyed).

I'd like to see such a feature implemented in D one day, but I doubt it will be, for whatever reasons. Luckily D is powerful enough to still get to a solid solution... unlike some languages, and I think that is what most of us here realize about D and why we even bother with it.

