On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner wrote:
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote:
[...]

The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea of what to replace the "in" for AA's with, I'd propose removing that meaning).

Why? Don't you realize that the context matters and [...]

Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder.

...

Yes, in an absolute sense, it will take more time to parse the context. But that sounds like a case of premature optimization.

I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand.

That's true. But I don't see how it matters much to the current argument. Remember, I'm not advocating using 'in' ;) I'm only saying it doesn't matter in a theoretical sense. If humans were as logical as they should be, it would matter even less. For example, a computer has no issue with using `in`, and it doesn't really take any more processing (maybe a cycle, but the context makes it clear). But, of course, we are not computers. So, in a practical sense, yes, the line has to be drawn somewhere, even if, IMO, it is not drawn in the best place. You agree with this, because you say it's OK for parentheses but not for `in`. You didn't seem to answer my statements and question about images, though. But I'm OK with people drawing lines in the sand; that really isn't what I'm arguing against. We have to draw lines. My point is that we should know we are drawing them. You seem to know this on some significant level, but I don't think most people do. So, if we argued for the next 10 years, we would just arrive at some refinement of our current opinions and experiences about the idea. That's a good thing in a sense, but I don't have 10 years to waste on such a trivial concept that really doesn't matter much ;) (Again, remember, I'm not advocating `in`; I'm not advocating anything in particular, but I am against doing nothing.)
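Just to make that concrete, here is a rough sketch (the names, like printIfPresent, are made up purely for illustration) of the two meanings `in` already carries in D today; the compiler has no trouble telling them apart from the context:

import std.stdio;

// `in` as a parameter storage class...
void printIfPresent(in string key, int[string] table)
{
    // ...and `in` as the AA membership operator, which yields a
    // pointer to the value, or null if the key is absent.
    if (auto p = key in table)
        writeln(key, " => ", *p);
}

void main()
{
    int[string] table = ["a": 1, "b": 2];
    printIfPresent("a", table); // prints: a => 1
    printIfPresent("z", table); // prints nothing
}

Whether a *human* reader stumbles over the two uses is, of course, the thing we're actually arguing about.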



If we are worried about saving time, then what about the tooling? Compiler speed? IDE startup time? Etc.? All these take time too, and optimizing one single aspect, as you know, won't necessarily save much time.

Their speed generally does not affect the time one has to spend to understand a piece of code.

Yes, but you are picking and choosing. To understand code, you have to write code; to write code you need a compiler, IDE, etc. You need a book, the internet, or other resources to learn things too. It's a much, much bigger can of worms than you realize or want to get into. Everything is interdependent. It's nice to make believe that we can separate everything into nice little quanta, but we can't, and when we ultimately try, we get results that make no sense. But, of course, it's about the best we can do with where humans are at in their evolution currently. The ramifications of one minor change can change everything... see the butterfly effect. Life is fractal-like, IMO (I can't prove it, but the evidence is staggering).

I mean, when you say "read code faster", I assume you mean from the moment you start to read a piece of code with your eyes to the end of the code... But do you realize that, in some sense, that is meaningless? What about the time it takes to turn on your computer? Why are you not including that? Or the time to scroll your mouse? These things matter, because surely you are trying to save time in the "absolute" sense?

E.g., so you have more time to spend with your family at the end of the day? Or more time to spend hitting a little white ball into a hole? Or whatever? If all you did was read code and no other factors were involved in the absolute time, then you would be 100% correct. But all those other factors do add up too.

Of course, the more code you read, the more important it becomes and the less important the other factors become. But then why are you reading so much code if you think it's a waste of time? So you can save some more time to read more code? If your goal is truly to read as much code as you can in your lifespan, then I think your analysis is 99.999...% correct. If you only code as a means to an end for other things, then I think your answer is about 10-40% correct (with a high degree of error, and dependent on context).

For me, the way I "value"/"judge" time is: how much stuff that I like to do can I fit into a day of my life, and how can I minimize the things that I ultimately do not want to do? Coding is one of those things I do not like to do; I do it as a means to an end. Hence, having tooling, IDEs, compilers, etc. that help me do what I want to do, coding-wise, as fast as possible (overall) is what is important.

I just think you are focusing here on one tiny aspect of the picture. That's not a bad thing; optimizing the whole requires optimizing all the parts. Just make sure you don't get caught up in optimizing something that isn't really that important. (You know this, because you are a coder, but it applies to life too, because we are all just "code" anyway.)



Maybe the language itself should be designed so there are no ambiguities at all? A single, simple keyword for each function? A new keyboard design should be implemented (ultimately a direct brain-to-editor interface for the fastest time, excluding the time for development and learning)?

I assume you mean "without context sensitive meanings" instead of "no ambiguities", because the latter should be the case as a matter of course (and mostly is, with few exceptions such as the dangling else ambiguity in C and friends). Assuming the former: As I stated earlier, it needs to be worth the cost.

Yes, I mean what I called "perceived ambiguities", because true ambiguities are impossible to compile logically; they are "errors".
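Take the dangling else you mention: a sketch like the one below (a made-up example, just for illustration) reads as ambiguous to a human at first glance, but the grammar in C, D and friends binds the `else` to the nearest `if`, so there is exactly one parse (and I believe dmd will even warn about this pattern if you compile with warnings enabled):

import std.stdio;

void example(bool a, bool b)
{
    if (a)
        if (b)
            writeln("a and b");
        else                    // binds to `if (b)`, not `if (a)`
            writeln("a but not b");
}

void main()
{
    example(true, false); // prints: a but not b
}

So even that one is a "perceived" ambiguity that the grammar resolves by a rule, not a true ambiguity.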



So, in this case I have to go with the practical side and say that it may be theoretically slower, but it is such an insignificant cost that worrying about it is over-optimization. I think you would agree, at least in this case.

Which is why I stated I'm opposing overloading `in` here as a matter of principle, because even small costs sum up in the long run if we get into the habit of just overloading.


I know; you just haven't convinced me enough to change my opinion that it really matters at the end of the day. It's going to be hard to convince me, since I really don't feel as strongly as you do about it. That might seem like a contradiction, but...

Again, the exact syntax is not important to me. If you really think it matters that much to you, and it does (i.e., you are not tricking yourself), then use a different keyword.

My proposal remains to not use a keyword and just upgrade existing template specialization.

I think that is a better way too, because it is based on a solid principle: https://en.wikipedia.org/wiki/Relational_theory, in a sense. I see it more as: things make more sense to the brain the closer those things are in relationship. Space may or may not have absolute meaning without objects, but humans can understand space better when there is stuff inside it (stuff that relates to the space, which I think some call feng shui ;)

You just really hadn't stated that principle in any clear way for me to understand what you meant until now. I.e., stating something like "... as a matter of principle" without saying which principle is itself ambiguous. Some principles are not real: some people base their principles on fictitious things, some on abstract ideals, etc. Basing something on a principle that is firmly established is meaningful.
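For reference, here is a rough sketch of what the existing template specialization syntax looks like today (the describe functions are made-up names, purely for illustration); as I understand your proposal, the idea would be to build on this form rather than give a keyword yet another meaning:

import std.stdio;

void describe(T)(T value)        // the general template
{
    writeln("something else: ", value);
}

void describe(T : int)(T value)  // existing specialization syntax
{
    writeln("an int: ", value);
}

void main()
{
    describe(42);      // prints: an int: 42
    describe("hello"); // prints: something else: hello
}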




When I see something I try to see it at once rather [...]


To really counter your argument: what about parentheses? They too have the same problem as `in`. They have perceived ambiguity... but they are not ambiguous. So your argument should apply to them too and you should be against them also, but are you? [To be clear here: foo()() and (3+4) contain 3 different use cases of ()'s... The first is template parameters, the second is function arguments, and the third is expression grouping.]
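For instance, a tiny sketch (foo is just a made-up name) showing all three at once:

import std.stdio;

void foo(T)(T x)       // first (): template parameters,
{                      // second (): runtime function parameters
    writeln(x);
}

void main()
{
    foo((3 + 4) * 2);  // outer (): function call arguments,
                       // inner (): expression grouping
}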

That doesn't counter my argument, it just states that parentheses have these costs, as well (which they do). The primary question would still be if they're worth that cost, which imho they are. Regardless of that, though, since they are already part of the language syntax (and are not going to be up for change), this is not something we could do something about, even if we agreed they weren't worth the cost. New syntax, however, is up for that kind of discussion, because once it's in it's essentially set in stone (not quite, but *very* slow to remove/change because of backwards compatibility).

Well, all I can really say about it is that one can't really know the costs. I've said that before. We guess. Hence, the best way out of this box is usually through experiment. We try something and see how it feels and if it seems to work. I'm talking about in general
[...]

Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.

Yes, but the only reason you have given for why it shouldn't be used is that you believe one shouldn't overload keywords, because it makes the meaning harder to parse. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is harder, and we would have to find out who is more right.

As I countered that in the above, I don't think your rebuttal is valid.

Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;)

Not as far as I see it, though I'm willing to agree to disagree :)


I have a logical argument against your absolute restriction, though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc. because they are visibly too close to each other.

It's not an absolute restriction, it's an absolute position from which I argue against including such overloading on principle. If it can be overcome by demonstrating that it can't sensibly be done without more overloading and that it adds enough value to be worth the increased overloading, I'd be fine with inclusion.

[...]

To simplify it down: do you have the same problems with all the ambiguities that already exist in almost all programming languages, which everyone is OK with on a practical level on a daily basis?

Again, you seem to mix ambiguity and context sensitivity.
W.r.t. the latter: I have a problem with those occurrences where I don't think the costs I associate with it are outweighed by its benefits (e.g. the `in` keyword's overloaded meaning for AA's).

I'm not mixing them; I exclude real ambiguities because they have no real meaning. I thought I mentioned something about that way back when, but who knows... Although I'd be curious whether any programming language exists whose grammar is ambiguous and could actually be realized.

So, my "[perceived] ambiguity" is your context sensitivity. But I was more trying to hint at how an arbitrary human my be confused by seeing the same thing used in to different contexts having two different meanings. They tend to just see them as ambiguities at first and are confused, until they learn the context, in which case the ambiguities no longer exist. They weren't real ambiguities in the first place but they "perceived" them as such. Usually context sensitivity, in the context of programming languages, has a very specific interpretation so I didn't want to use it.







[...]

Why do you think that? Less than ten people have participated in this thread so far.

I am not talking about just this thread, I am talking about all threads and all things in which humans attempt to determine the use of something. [...]

Fair enough, though personally I'd need to see empirical proof of those general claims about human behaviour before I could share that position.

Lol, you should have plenty of proof. Just look around. [...]

Anecdotes/generalizing from personal experiences do not equate to proof (which is why they're usually accompanied by things like "in my experience").

There is no such thing as proof in life. If there were, we'd surely have something close to it by now. At best, we have mathematical proof. It might be that existence is mathematical (it seems to be so, as mathematics can be used to explain the relationships between just about anything). But human behavior is pretty typical and has patterns, just like most phenomena. As much as humans have discovered about life, the general pattern is that the more we learn, the more we see that there are underlying factors generating these patterns.

And so, if you believe in this pattern-oriented nature of life (which is more fractal-like/self-similar), you can start connecting dots. It may turn out that you connected them wrong, but that's still a step in the right direction: you reconnect things and learn something new. Draw a new line...

Look at how children behave. You remember how you were as a child, the things that went on. Do you think that once a human "grows up" they somehow change and grow beyond those behaviors? Or is it more logical that those behaviors just "morph" into 'new' behaviors that are really just functions of the old ones? When you realize that people are just children with older bodies and more experiences, you start to see patterns. E.g., politicians: they are just a certain kind of child. You might have had a friend who, now that you look back, you could say was a "politician" (or whatever). Grown-up behavior is just child behavior that has grown up; it is not completely different.

The same can be said of programming languages. Programming languages don't just jump from one thing to another; they evolve in a continuous way. What we experience now is the evolution of everything that came before. There are no holes or gaps or leaps. It is a differentiable function, so to speak (but of, probably, an infinite number of dimensions). Everything is connected/related.

Anyway, I think we are starting to drift into the weeds (but that is usually where the big fish hide!) ;)



I'd like to see such a feature implemented in D one day, but I doubt it will be, for whatever reasons. Luckily, D is powerful enough to still get to a solid solution... unlike some languages, and I think that is what most of us here realize about D and why we even bother with it.

Well, so far the (singular) reason is that nobody that wants it in the language has invested the time to write a DIP :p

Yep.

I guess the problem with the D community is that there are no real "champions" of it. Walter is no King Arthur. I feel he has probably lost a lot of the youthful zeal that usually provides the impetus for great things. Many of the contributors here do not make money off of D in any significant way and hence do it more as a hobby, so the practical side of things prevents D from reaching great heights (at this point). I hope there is enough thrust for escape velocity, though (my feeling is there isn't, but I hope I'm wrong). Like you say about saving cycles reading: well, if D isn't going to be around in any significant way (if it starts to stagnate in the next few years), then my investment of time will not be very well rewarded. I've learned a lot of new programming things and some life stuff from it, so it's not a total loss, but it would be a shame (not just for me).

What we know for sure is that if D doesn't progress at a sufficient "rate", it will be overtaken by other languages and eventually die out. This is a fact, as it will happen (everything that lives dies, another "over-generalization" born of circumstantial evidence, but one that everyone should be able to agree on...). D has to keep up with the Kardashians if it wants to be cool... unfortunately.



