On Thursday, 30 August 2018 at 00:10:42 UTC, Paul Backus wrote:
> On Wednesday, 29 August 2018 at 22:18:09 UTC, Everlast wrote:
>> No it is not! You have simply accepted it as fact, which doesn't make it consistent.

>> If you take 100 non-programmers (say, mathematicians) and ask them what the natural extension for allowing an arbitrary number of parameters is, knowing that A is a type, [] means array, and ... means "an arbitrary number of", they will NOT think A[]... makes sense.

> Has anyone actually done such a survey? If not, how can you possibly be sure what the result will be?

>> If I'm wrong, then you have to prove why a syntax such as bar(int a...) cannot be interpreted singularly in the way I have specified.

> Of course, there's no inherent reason why `bar(int a...)` couldn't be interpreted the way you've specified. But there's also no reason why we couldn't use `bar(int... a)`, like Java, or `bar(params int[] a)`, like C#, or any of the other syntaxes you can see if you visit the Rosetta Code page on variadic functions. [1]

> All programming languages are artificial. All syntax is arbitrary. What feels "natural" to you is not a universal truth, but a matter of personal taste and preference. To an experienced Scheme programmer, `(foo . args)` probably feels more natural than any of the examples I've mentioned. To an experienced APL programmer, the entire idea of a separate syntax for variadic functions probably sounds ridiculous.
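
For concreteness, here is what the syntax under discussion does in current D — a minimal sketch, with `sum` as a hypothetical example function rather than code from either post:

```d
import std.stdio;

// D's typesafe variadic syntax: the trailing ... lets callers pass
// the elements of the final array parameter individually.
void sum(int[] nums...)
{
    int total = 0;
    foreach (n; nums)
        total += n;
    writeln(total);
}

void main()
{
    sum(1, 2, 3);   // arguments are collected into nums; prints 6
    sum([1, 2, 3]); // passing an explicit array also works; prints 6
}
```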

This is not true! You claim that I'm making a blanket statement about what mathematicians would think, and then you do the same.

Not everything in the universe is arbitrary (if so, prove it! ;)

In any case, even if you were right, there is still a partial ordering of naturalness based on other things that are pre-existing.


> Personally, I find the most pragmatic approach to be "when in Rome, do as the Romans do." So when I'm programming in D, I write `foo(int[] a...)`, and when I'm programming in Python, I write `foo(*args)`, and when I'm programming in C, I write `foo(...)` and `#include <stdarg.h>`. If your goal is to solve actual problems, arguing about syntactic minutiae is a waste of time.

This may be true, but it also leads to doing what the Romans did, such as wiping their asses with rocks. That is only necessary if you don't have anything else, and it doesn't mean there isn't a better way. The only reason programming languages allow flaws (which then become "features") is "backwards compatibility".

To put this to rest: I'll make a D fork that simply requires every character of the input to be duplicated (it removes every other character from the source, then passes the result to the D compiler)...

You wouldn't claim that the fork is fine by "do as the Romans do" logic, would you? You would say that it is a pointless syntax... Hence, you would think just like I'm thinking about the []. On some meaningless level you can claim the fork is valid, but you wouldn't program in it, for obvious reasons.

So the fact that [] is a much less obvious issue doesn't change anything: since you haven't pointed out any valid reason why it is necessary, it is, at least so far, the same fundamental problem as the D fork described above.

In both cases the syntaxes are valid w.r.t. the D compiler and the fork, respectively, and in both cases they are unnecessary syntaxes that offer nothing new and overcomplicate things. The only reason they seem different is that variadic declarations come up far less often than having to double every character would.

Also, ultimately we are talking about the semantics of ... and not of []. My point is that if ... is interpreted properly there is no reason to write the [] at all, and that this is a D flaw: it overcomplicates the syntax for no good reason.
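
To make the contrast concrete, a minimal sketch of what D accepts today versus the reading proposed in this thread (the second form is only the suggestion, not valid D):

```d
// What current D requires: the variadic parameter must be an array type.
void bar(int[] a...) { }

// The proposed reading: let ... alone imply "arbitrarily many ints,
// collected into an array", with no [] needed.
// NOTE: this does NOT compile in current D; it is only the suggestion.
// void bar(int a...) { }
```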

If we are just talking about what D requires, then there is nothing to talk about... and that should be obvious.
