Re: Should 'in' Imply 'ref' as Well for Value Types?
On Saturday, 5 May 2018 at 15:39:19 UTC, Jonathan M Davis wrote:
> On Saturday, May 05, 2018 15:22:04 Bolpat via Digitalmars-d wrote:
>> On Friday, 4 May 2018 at 09:34:14 UTC, Jonathan M Davis wrote:
>>> [...] It's actually not infrequent now that in C++, you want to pass stuff by value rather than const& precisely because move semantics can be used to avoid copies. So, it's not at all necessarily the case that passing by ref is the efficient thing to do. It's heavily dependent on the code in question.
>> I once proposed that `in` can mean `const scope ref` that also binds rvalues. https://github.com/dlang/DIPs/pull/111#issuecomment-381911140 We could make `in` be something similar to `inline`. The compiler can implement it as stated above (assign the expression to a temporary, reference it), or use a copy if a copy is cheaper than referencing.
> Having ref of any kind accept rvalues is a highly controversial issue, and it may or may not ever be in the language. If it's added, then at that point, whether it makes sense to make in imply ref could be re-examined, and maybe at that point, doing so would make great sense. But as long as ref does not accept rvalues, it really doesn't make sense. It would break too much code and would be far too annoying to use in many, many cases where it is currently commonly used.

I never suggested that some spelled-out `ref` should bind rvalues. Having explicit `ref` bind rvalues is a mistake C++ made, and it confuses the hell out of people. For clarification: `in` should mean for the called function that it may only "look at" the information for decisions, but not "touch" it. In this sense, not only modifying, but also copying or leaking it are forms of touching. Note that objects of non-copyable types can still be looked at. This is how I came to what `in` naturally must mean. It must mean `const`. It must mean `scope`.
It must mean referencing, but not in the restrictive way of `ref`; rather in a permissive interpretation, so that it may bind rvalues, too. For the caller, `in` basically is `const scope` with guaranteed no copying. So `in` should not imply `ref`; it should imply referencing, which is not the same thing. `in` and `ref` could be combined, so that the restrictive character of `ref` does its job, but I'd favor not allowing that (similarly, `out ref` is not allowed). If you want `const scope ref`, spell it out. I'd assume the cases where you want to allow lvalues only are rare. They may well exist, so it's good to be able to express them.
Re: Tuple DIP
On Sunday, 14 January 2018 at 00:01:15 UTC, rikki cattermole wrote:
> On 13/01/2018 11:45 PM, Timothee Cour wrote:
>> some people have suggested using `{a, b}` instead of `(a,b)`; this would not work because of ambiguity, eg: `auto fun(){ return {}; }` already has a meaning, so the empty tuple would not work. so `()` is indeed better.
> Easy fix: tuples must have a length greater than 0. A tuple with length 0 is by definition void.

Zero-length tuples exist, and their type is not void: that type has a value, namely the empty tuple. It's similar to the empty word, the empty array, etc. They naturally arise in corner cases of templates. You have to support them like static arrays of length 0. Effectively forbidding them would be an unreasonable limitation.
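The analogy with zero-length static arrays can be checked directly: D already supports this corner case today, which is the precedent the argument appeals to.

```d
void main()
{
    // D already allows zero-length static arrays -- the precedent
    // the post appeals to for zero-length tuples
    int[0] empty;
    static assert(empty.length == 0);
    static assert(int[0].sizeof == 0);
}
```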
Re: Tuple DIP
On Friday, 12 January 2018 at 22:44:48 UTC, Timon Gehr wrote:
> [...] This DIP aims to make code like the following valid D:
> ---
> auto (a, b) = (1, 2);
> (int a, int b) = (1, 2);
> ---
> [...]

How is (1, 2) different from [1, 2] (a static array)? It makes no sense to me to have both, plus probably a bunch of conversion rules/functions. Why don't you consider extending (type-homogeneous) static arrays to (finitely type-enumerated) tuples? It instantly solves both the 1-tuple problem and the comma operator vs. tuple literal ambiguity. You'd have T[n] as an alias for the tuple type consisting of n objects of type T. I've written something about that here: https://forum.dlang.org/post/wwgwwepihklttnqgh...@forum.dlang.org (sorry for my bad English in that post) The main reason I'd vote against the DIP: parentheses should only be used for operator precedence and function calls.
Static If with Declaration
When I wanted something like

    static if (enum var = expr) { ... }

I did

    static foreach (enum var; { auto x = expr; return x ? [ x ] : [ ]; }())
    { ... }

The only drawback is, there is no `else`. You can use the trick even for a normal if when the condition is not identical to the expression of the declared variable:

    if (auto var = expr) // tests cast(bool) var
    { ... }

Same solution:

    foreach (var; { auto x = expr; return cond ? [ x ] : [ ]; }())
    { ... }

Drawback apart from not having else: it may allocate if the compiler doesn't optimize it. Even then, the code is not @nogc. One would use std.range.only for that:

    foreach (var; {
        import std.range : only;
        auto singleton = only(expr);
        if (!cond) singleton.popFront;
        return singleton;
    }())
    { ... }

This can be achieved, too, by using std.algorithm.iteration.filter:

    import std.range : only;
    import std.algorithm.iteration : filter;
    foreach (var; expr.only.filter!(x => cond))
    { ... }

Has anyone encountered something similar? Note that most of the time, you can put the declaration before the test. You cannot in mixin templates, which is where I needed it.
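The first trick can be assembled into a self-contained example; `expr` and the loop body are placeholders here, and the lambda is evaluated at compile time (CTFE).

```d
void main()
{
    enum expr = 42;
    // The lambda is CTFE-evaluated: it yields a one-element array if
    // expr is truthy, an empty one otherwise, so the static foreach
    // body is instantiated either once or not at all.
    static foreach (var; { auto x = expr; return x ? [x] : []; }())
    {
        static assert(var == 42);
    }
}
```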
Re: Inheritance from multiple interfaces with the same method name
On Thursday, 7 December 2017 at 23:00:38 UTC, Mike Franklin wrote:
> If you think D should support something like this, the first thing to do is to file a bug report.

Sounds more like a DIP to me. There is no way to enable this without some kind of nontrivial syntax. I'd go with the VB approach and have something like

    void foo1() alias I.foo { }

but that's something to settle when discussing the DIP.
Re: Inheritance from multiple interfaces with the same method name
On Thursday, 7 December 2017 at 15:14:48 UTC, Adam D. Ruppe wrote:
> On Thursday, 7 December 2017 at 00:45:21 UTC, Mike Franklin wrote:
>> // Error: A.f called with argument types () matches both: A.f() and A.f()
>> // Yeah, that error message could be better.
>> //a.f();
>> (cast(I)a).f(); // prints "void f()"
>> (cast(J)a).f(); // prints "int f()"
> D also allows you to simply write:
>     a.I.f();
>     a.J.f();
> also works for explicitly calling a base class implementation btw

This implies that I cannot give two syntactically identical methods different implementations: if J also declared void f(), I could not implement I.f() and J.f() differently. That would be relevant if they are supposed to behave differently, e.g. if they have conflicting contracts.
Inheritance from multiple interfaces with the same method name
Say I have two interfaces

    interface I { void f(); }

and

    interface J { int f(); }

implemented by some class

    class A : I, J
    {
        // challenge by the compiler:
        // implement f()!
    }

VB.NET allows that by renaming the implementation (it allows renaming generally, not only in this corner case). C# allows it by specifying the target interface when implementing (the specification can be omitted when there is exactly one candidate); such an explicit implementation is private. (See [1]) Java just disallows the case where the two methods are incompatible. If they are compatible, they must be implemented by the same method. If they are meant to do different things, you are screwed. What is D's position on that? The interface spec [2] does not say anything about this case. [1] https://stackoverflow.com/questions/2371178/inheritance-from-multiple-interfaces-with-the-same-method-name [2] https://dlang.org/spec/interface.html
Re: Proposal: Object/?? Destruction
On Monday, 16 October 2017 at 23:29:46 UTC, sarn wrote: On Sunday, 15 October 2017 at 15:19:21 UTC, Q. Schroll wrote: On Saturday, 14 October 2017 at 23:20:26 UTC, sarn wrote: On Saturday, 14 October 2017 at 22:20:46 UTC, Q. Schroll wrote: Therefore, and because of brackets, you can distinguish f(1, 2) from f([1, 2]). But in f([1, 2]), it's ambiguous (just by parsing) whether [1, 2] is a tuple literal or a dynamic array literal. It would be a tuple if that's the best match; otherwise conversion to int[] is tried. ... You'd need to use a prefix or something to the bracket syntax. [snip] I just argued, you don't!

> But have you thought through all the implications?

Yes. No weirdness is being introduced that is not there already. Maybe I have overlooked something; I will not give you or anyone else a guarantee that the solution works perfectly. I've thought the case through at length. An open question is allowing partly const/immutable/shared (c/i/s) tuples. For now, I haven't addressed that. Even c/i/s-homogeneous tuples (the tuple is c/i/s as a whole or not) would be a win in my opinion. One rarely needs a tuple with one component immutable but the other one mutable; that is what a named struct is for. On the other hand, I don't know of any issues with having a partly c/i/s std.typecons.Tuple.

> Take this code:
>     void main(string[] args)
>     {
>         import std.stdio : writeln;
>         writeln([1, 3.14]);
>     }
> As you're probably 100% aware, this is totally valid D code today. [1, 3.14] becomes a double[] because 1 gets converted to a double.

Right conclusion with insufficient explanation. [1, 3.14] is a static array in the first place. It occupies a fully inferred template parameter position. I don't know the implementation, but every time I tested, it behaves as if typeof(expr) were used after the bang to set the template argument manually (even for Voldemort types etc., where typeof is sometimes impossible due to missing frame pointers). typeof returns "dynamic array of T" for array literals.
This is all the weirdness going on here. It is present today and would remain present if you interpret [1, 3.14] as a tuple.

> If this kind of behaviour changes, code will break, so you'll need a bunch of exceptions to the "it would be a tuple if that's the best match" rule.

The only exceptions are typeof and (therefore, I suppose) template inference.

> Also, for the same backwards compatibility reasons, it would be impractical in most cases to add any tuple overloads to most existing standard library functions that currently accept slices or arrays, but presumably new functions would be meant to take advantage of the new syntax (else there wouldn't be much point creating a new syntax).

You don't have to as long as you don't want to support tuples explicitly; otherwise you have to. If you have a void f(int, double), you cannot plug in [1, 3.14]. You can use some expand to do it. You wouldn't want to either. If you have something *explicitly typed* as a tuple, e.g.

    [int, double] tup = [1, 3.14];

you can make the call f(tup) because auto-expansion does its job. This is the use case. If you have void f([int, double]), you can plug in tuple literals. If you use a tuple literal for a function call, the compiler will search for explicit tuple matches. If it cannot find any, conversion to a dynamic array happens.

> So, a literal like [1, 3.14] would basically be a tuple, but would be converted to double[] in a bunch of special cases for historical reasons.

Yes. It would be converted in almost all cases -- the same as with static arrays -- because the best match doesn't occur very often, and typeof never returns static arrays or tuples for literals.
> If you're not sure if this is really a problem, take a look at the confusion caused by the magic in {} syntax: https://forum.dlang.org/thread/ecwfiderxbfqzjcyy...@forum.dlang.org https://forum.dlang.org/thread/ihsmxiplprxwlqkgw...@forum.dlang.org https://forum.dlang.org/thread/qsayoktyffczskrnm...@forum.dlang.org

This is completely unrelated. Concerning the issues people have with (..) => { .. }, I've filed an enhancement request to deprecate it in that specific case: https://issues.dlang.org/show_bug.cgi?id=17951

> To be totally honest, I still don't see what's wrong with just creating a new bracket syntax, instead of adding more magic to [] (or () for that matter).

It's not adding any magic to [] that isn't there already. The other proposals are adding magic to (). Even some mathematicians use chevrons (angle brackets) for tuples, as they see parentheses as indicators of precedence. I'd vote against angle brackets; see C++ templates for reasons. Logicians and Haskellers don't even need parentheses for function calls. Could I convince you?
Re: Proposal: Object/?? Destruction
On Saturday, 14 October 2017 at 23:20:26 UTC, sarn wrote:
>> On Saturday, 14 October 2017 at 22:20:46 UTC, Q. Schroll wrote: Therefore, and because of brackets, you can distinguish f(1, 2) from f([1, 2]).
> But in f([1, 2]), it's ambiguous (just by parsing) whether [1, 2] is a tuple literal or a dynamic array literal.

It would be a tuple if that's the best match; otherwise conversion to int[] is tried. Even today, [1, 2] is ambiguous: is it a static or a dynamic array of int? Is it of type int[2] or int[]? The spec says it depends on what you do with it! We can build on that and generalize the int[2] version to [int, int] -- a special case of a 2-tuple. It remains the same: if [1, 2] can be used as a dynamic array, it will be. If not, the compiler tries a static array. With tuples, it would try a tuple. If f has an overload taking int[] or something similar, it will treat [1, 2] as a dynamic array with homogeneous types. If the objects are not compatible, an error occurs like "tuple [..contents..] cannot be implicitly converted to T[]". Else, if it has an overload for a compatible (length, implicit conversion) tuple, that one will be taken. Consider

    void f(int[2] v) { } // (1)
    void f(int[ ] v) { } // (2)

Here, f([1, 2]) calls (1) as it is the better match. Yet with auto x = [1, 2]; f(x) calls (2) because of strict typing. So while [1, 2] is of type int[2], or [int, int] as a tuple, typeof([1, 2]) will still yield int[]. You cannot ask for the one and only correct type of []-literals, as they have more than one type. Even if the values are incompatible, like [1, "a"], asking typeof([1, "a"]) will result in an error, because in typeof deduction, []-literals must result in dynamic arrays. This holds for auto, because auto has the same rules. auto tup = [1, "a"]; must fail. You'd need [auto, auto] tup = [1, "a"]; or maybe some shorthand syntax that lowers to this.

> You'd need to use a prefix or something to the bracket syntax. [snip]

I just argued, you don't!
The reason there is no such prefix, and not even a function in Phobos, is that it is a trivial task to write one.

    T[n] s(T, size_t n)(T[n] elem ...) { return elem; }

    static assert(is(typeof(s(1, 2, 3)) == int[3]));
    static assert(is(typeof([1, 2, 3].s) == int[3]));

    auto x = s(1, 2, 3);
    static assert(is(typeof(x) == int[3]));
    auto y = s(1, 2.0, 3);
    static assert(is(typeof(y) == double[3]));

Try it yourself. It works fine. Instead of s, one would use t or tuple to allow incompatible types.
Re: Proposal: Object/?? Destruction
I've thought about tuples and stuff for a while. For tuples, I'll use [brackets]. Reasons follow.

Homogeneous tuples are repetitions of a single type. We have them today in the form of static arrays. We could allow "inhomogeneous arrays" and call them tuples. T[n] is then an alias for [T, T, .., T] with n repetitions. In place of a type, [T, S] means Tuple!(T, S), and in place of an object, [t, s] means tuple(t, s). Note that D's grammar allows disambiguating types and objects by syntax. A tuple implicitly converts to another if the pointwise types do.

Bracket literals constitute a separate type that exists in the compiler only. We have that already, to make int[2] a = [ 1, 2 ]; not allocate on the heap, while int[] a = [ 1, 2 ]; does. So at first, [ 1, 2.0 ] is of type [int, double]. If you assign it to a double[2], because int -> double, the conversion is no problem. The thing that changes is when you ask for typeof([ 1, 2.0 ]) directly. Of course, auto tup = [ 1, 2.0 ]; will homogenize the tuple to double[] similar to how it does today.

Declaration-decomposition can be done as auto [a, b] = f(x); (non-exclusive) or [auto a, auto b] = f(x); The first one is shorter; the latter one lets you do [int a, auto b] = f(x); So auto [x1, .. xn] is just shorthand for [auto x1, .. auto xn]. Assignment-decomposition is the same with the types/auto missing. Swap can be done with [a, b] = [b, a]; From the type system's view, if a tuple literal has only lvalues inside, it is an lvalue, too. Note that there must be some way to handle side effects correctly. The problem is already known from normal assignment.

1-tuples are included in a natural way. int[1] is different from int today. When we have first-class tuples in D, we should not distinguish static arrays from homogeneous tuples. Therefore, and because of brackets, you can distinguish f(1, 2) from f([1, 2]). I find the syntax (1,) for 1-tuples weird. Parentheses are used only for operator precedence and function calls.
They should not be used for tuples -- the 1-tuple case and f(1, 2) vs f((1, 2)) prove that. Parentheses are a tool of syntax, not semantics. You can never omit brackets in D. Maybe you can use some of that as input for your DIP.
Re: Implicit Constructors
On Friday, 13 October 2017 at 14:50:44 UTC, Adam D. Ruppe wrote:
> [snip] But actually, I really wish D just had implicit ctors on the types themselves. I think C++'s mistake was that implicit was the default, and you have to write `explicit`. If we did the opposite, where implicit was opt in, I think it would be useful without the worry C++ had.

Not completely. Walter and Andrei oppose even explicitly annotated implicit constructors [1, 2]. With my solution, you state a two-sided desire, like offer and acceptance. Contrary to (even explicitly annotated, non-default) implicit constructors being the only thing necessary for getting implicit constructor calls, this makes it very transparent what's happening. The only exception is when `S` is in a library, you use @implicit(0) on it, and the supplier decides to add more @implicit constructors to `S`. This is what we (will) have @future [3] for. [1] https://issues.dlang.org/show_bug.cgi?id=4875#c5 [2] https://issues.dlang.org/show_bug.cgi?id=7019#c8 [3] https://github.com/dlang/DIPs/blob/master/DIPs/DIP1007.md
Re: Implicit Constructors
On Friday, 13 October 2017 at 13:01:48 UTC, Steven Schveighoffer wrote:
> On 10/12/17 7:57 PM, Q. Schroll wrote:
>> We have some sort of implicit construction already. Weirdly, it's reserved for classes. Just look at this:
>>     class C { this(int x) { } }
>>     void foo(C c ...) { }
>>     void main() { foo(0); }
>> If you put @nogc in front of ctor and functions, the compiler tells you not to use 'new' in main while you actually don't. The compiler merely inserts it for you, only to complain about it.
> Not sure where you put the @nogc.
>     class C { this(int x) @nogc { } }
>     void foo(C c ...) @nogc { }
>     void main() @nogc { foo(0); }
> It tells you not to use 'new' while you don't (explicitly, at least). What is likely happening is that the call to foo is lowered to foo(new C(0)). Indeed, using -vcg-ast proves it.

Probably. I don't care -- the compiler should not give me this error message. I've filed a bug report, but I cannot find it anymore.

> The spec says it can put the class on the stack, but is not required to.

Exactly. It shouldn't work and doesn't. That's not the problem.

>> One could propose to extend the three-dots notation to structs. I don't.
> The fact that this is not supported (it isn't, I tried it) doesn't make any sense.

I tried it once, too.

> It's likely this hails from a time where classes had ctors and structs did not, and is just not a feature that anyone cared about or used. IMO, it should be extended to structs just in terms of consistency. But I don't think it would be a high priority.

That would be another consistent solution. Even if we had this for structs, there is the @nogc argument not to allow it for classes (the compiler inserts nontrivial things: the heap allocation).

>> I'd vote for deprecating the three-dots for classes. Did you know it exists? Did you use it - like ever? Does anyone depend on it?
> I'm mixed on it. I wouldn't care personally if it was removed, but it's a feature that may be used somewhere, and there's no harm in keeping it.
Even extending this to structs does not give you implicit ctor calls. You can use ... only for the last parameter, for obvious reasons. It's completely different from implicit ctor calls. I only mentioned it as it is the closest thing in D to implicit ctor calls.

> [snip] It's a neat idea. I don't see why we would need to remove the typesafe variadics to allow this to work.

You don't. I mentioned it as it is, in some sense, an implicit ctor call.

> It *really* would be nice though, to allow annotations on parameters. The @implicit(1) stinks. Would look much better as:
>     proto_goo(int v, @implicit S s, bool b);

I tried that, too, and failed because of that. (I'd even assume anyone would, because it'd be the obvious way to want it.) This is another reason to allow parameter annotations.

> Where you may run into trouble is if there is ambiguity (for instance 2 implicit parameters could match the potential arguments in different ways).

How? I only accept *one* parameter. Ctors with more than one parameter are disallowed. One could allow those which can be called with one argument because they fill the rest with default values. I didn't, for the sake of an easier implementation. It's a first sketch, a proof of concept.

> Another option is to not worry about tagging which parameters would be implicit, and go only on the fact that types in the parameter list have @implicit constructors when you call implicitOverloads.

There are two reasons against it. 1. implicitOverloads would search much more for nothing. 2. You'd add implicit overloads the author of the function maybe wouldn't want. You can think of my system as offer and acceptance. You need both. @implicit ctors do nothing by themselves, the same way @implicit(1) does nothing if the targeted type has nothing to offer. That's on purpose, to make implicit ctor calls as transparent as possible. Walter didn't want implicit construction because it is non-transparent. Under these circumstances, it has good chances to be accepted for Phobos.
Implicit Constructors
We have some sort of implicit construction already. Weirdly, it's reserved for classes. Just look at this:

    class C { this(int x) { } }
    void foo(C c ...) { }
    void main() { foo(0); }

If you put @nogc in front of the ctor and the functions, the compiler tells you not to use 'new' in main while you actually don't. The compiler merely inserts it for you, only to complain about it. One could propose to extend the three-dots notation to structs. I don't. I'd vote for deprecating the three-dots for classes. Did you know it exists? Did you use it - like ever? Does anyone depend on it?

(If you don't want to read it all: the examples may be expressive enough.)

The main point of this post is a library solution to implicit constructor calls. The implementation is very conservative: a double handshake. Not only must the constructors be annotated with @implicit; the functions that want to allow being called with a constructor argument must explicitly state that, too (these functions are called "receiving" functions). @implicit constructors must have exactly one parameter (no defaulted additional ones), and a receiving function has an annotation @implicit(i), where i is the index of a parameter for which it allows plugging in a constructor argument of its type. Sounds complicated? See an example.

    struct S
    {
        import bolpat.implicitCtor : implicit;

        long s;

        @implicit this(int x)  { s = x; }
        @implicit this(long x) { s = x; }
        this(bool x) { s = x ? 0 : -1; }
    }

This is all that you need from the one side. Now the receiver side.

    import bolpat.implicitCtor : implicit, implicitOverloads;

    long proto_goo(int v, S s, bool b) @implicit(1)
    {
        import std.stdio : writeln;
        writeln("goo: call S with value ", s.s);
        return b ? v : s.s;
    }

    void proto_goo(char c) { } // no @implicit(i) ==> will be ignored

    mixin implicitOverloads!("goo", proto_goo); // generates goo

    assert(goo(1, 2, false) == 2);

It also works for members.
See:

    struct Test
    {
        int proto_foo(int v, S s) @implicit(1)
        {
            import std.stdio : writeln;
            writeln("foo: call S with value ", s.s);
            return v;
        }

        void proto_foo(char c) { } // ignored

        mixin implicitOverloads!("foo", proto_foo);
    }

What to do further? Make @implicit take more than one argument. I'm working on it. This is just a first taste. And for Stefan Koch: thanks to static foreach, one can save so many templates. tl;dr the implementation is here: https://github.com/Bolpat/dUtility/blob/master/bolpat/implicitCtor.d
Re: enum pointers or class references limitation
On Friday, 1 September 2017 at 23:13:50 UTC, Q. Schroll wrote:
> [..] Just as Scott Meyers said: make it easy to use correctly and hard to use incorrectly. Today it's easy to use incorrectly.

While

    enum foo = [1, 2, 3];
    assert(foo is foo);

fails,

    enum bla = "123";
    assert(bla is bla);

passes. Enhancement request submitted: https://issues.dlang.org/show_bug.cgi?id=17799 Unfortunately, I found out afterwards that the second case has nothing to do with mutability. Making foo an immutable(int)[] does not change anything. It only works for const(char)[], immutable(char)[], and probably the w/dchar friends. That's odd.
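The behavior described above can be reproduced in one short program; the `static immutable` comparison is added for contrast with the suggestion in the neighboring post.

```d
void main()
{
    // every use of an enum with indirections expands to a fresh array
    // literal, so the two sides of `is` are distinct arrays
    enum foo = [1, 2, 3];
    assert(!(foo is foo));

    // string enums behave differently: string literals are pooled
    enum bla = "123";
    assert(bla is bla);

    // a static immutable is a real symbol with a single address
    static immutable bar = [1, 2, 3];
    assert(bar is bar);
}
```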
Re: enum pointers or class references limitation
On Friday, 1 September 2017 at 21:08:20 UTC, Ali Çehreli wrote:
> [snip]
>> assert(!([1,2,3] is [1,2,3]));
>> Which is exactly what enum expands to and totally expected. Where is the surprise?
> This is not a surprise. Array literals are not identical. In the surprising case, foo is a symbol, seemingly of a variable. Failing the 'is' test is surprising in that case. I've just remembered that the actually surprising case is the following explicit check:
>     assert(!(foo.ptr is foo.ptr)); // Passes
> I find it surprising because it looks like an entity that does not have a well-behaved .ptr. (Aside: I think your code might be surprising to at least newcomers as well.)

That's a good reason to discourage or disallow enums with indirections. The compiler should suggest using static immutable instead, as it does not have such oddities. The only advantages of enum are being guaranteed to be known at compile time and that enums can be templatized (which can also be done for static immutable via an eponymous template). I'd vote for a warning/error when the type of an enum has indirections, together with a pragma to switch the warning off for the rare case where you know exactly what you are doing. Just as Scott Meyers said: make it easy to use correctly and hard to use incorrectly. Today it's easy to use incorrectly.
Structs as Keys for AAs
In [1] it says at 5. that

    For this reason, and for legacy reasons, an associative array key is not allowed to define a specialized opCmp, but omit a specialized opEquals. This restriction may be removed in future versions of D.

I'm not completely sure what that means. Does "specialized" mean "user-defined"? I just challenged the spec and found an error along the way: [2]. Apart from that, it compiles. For 5. I used

    struct Key
    {
        int id;
        string tag;

        int opCmp(const Key other) const
        {
            return this.id < other.id ? -1 : this.id == other.id ? 0 : 1;
        }

        bool opEquals(ref const Key other) const @safe pure nothrow
        {
            return this.id == other.id;
        }

        size_t toHash() const @safe pure nothrow
        {
            return id;
        }
    }

as the key type. To me, the part "is not allowed to define a specialized opCmp" is clearly wrong; it's either a compiler bug or an error in the spec. Concerning opEquals and opCmp in general: why isn't opEquals lowered to opCmp == 0 when it is not present? [1] https://dlang.org/spec/hash-map.html#using_struct_as_key [2] https://github.com/dlang/dlang.org/pull/1861
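Using the Key struct from the post, a quick check shows that AA lookup goes through the user-defined toHash and opEquals, both of which ignore tag, so keys differing only in tag are treated as equal.

```d
struct Key
{
    int id;
    string tag;

    int opCmp(const Key other) const
    {
        return this.id < other.id ? -1 : this.id == other.id ? 0 : 1;
    }

    bool opEquals(ref const Key other) const @safe pure nothrow
    {
        return this.id == other.id; // tag is ignored
    }

    size_t toHash() const @safe pure nothrow
    {
        return id;
    }
}

void main()
{
    int[Key] aa;
    aa[Key(1, "a")] = 10;
    // lookup uses toHash and opEquals, both of which ignore tag,
    // so a key with a different tag still finds the entry
    assert(Key(1, "b") in aa);
    assert(aa[Key(1, "b")] == 10);
}
```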
relax disabled Final!T unary operators
In std.experimental.typecons.Final, the operators ++ and -- are disabled. I suspect this was done with simple types such as int in mind, where increment is nothing different from += 1, which is by definition an assignment. From a distant standpoint, there is no reason to disable them at all. They modify the object, but calling a modifying method on a Final! struct/class is not disabled either. An operation need not be disabled just because it is equivalent to an assignment. I agree that this behavior is natural: Final!int should not be modifiable at all, but that's precisely what immutable int does. This leads to the following conclusion: make Final an alias of immutable for types without indirections; those cast implicitly to mutable. E.g. being allowed to assign to the components of a Tuple!(int, int) but not to the tuple itself is ridiculous, because it's the same thing. My conclusion: Final only makes sense for things that have indirections. Things are handled similarly when overloading opAssign on classes: identity assignment cannot be overloaded, but other forms can, because other forms are just modifying operations. The reason why any form of assignment should be disallowed is simple: it is confusing, and we would have to determine which assignments to allow. The Final implementation cannot know which assignment to a struct behaves like a modification. Allowing non-identity class assignment is an option, but I'm against it: it's still an assignment by definition.
Re: `in` no longer same as `const ref`
On Monday, 30 January 2017 at 12:08:06 UTC, Olivier FAURE wrote:
> On Monday, 30 January 2017 at 06:38:11 UTC, Jonathan M Davis wrote:
>> Personally, I think that effectively having an alias for two attributes in a single attribute is a confusing design decision anyway and think that it was a mistake, but we've had folks slapping in on stuff for years with no enforcement, and flipping the switch on that would likely not be pretty. - Jonathan M Davis
> I've always thought of 'in' as a visual shorthand for "this parameter doesn't care whether you give it a deep copy or a shallow reference", personally.

That would have been a far better definition. Why does anyone really need a shorthand attribute for two attributes that could easily be spelled out? You can type anything for "const scope" while programming and then do a search-and-replace; that's trivial. Can't we make "in" mean "const scope ref" that binds rvalues, too? Effectively, that's (similar to) what "const T&" in C++ means: a non-copying const view of the object. We have the longstanding problem that one must overload a function to bind both lvalues and rvalues. That is what I'd suppose the dual of "out" to be.
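The overloading problem mentioned above can be worked around in today's D with a templated `auto ref` parameter, which binds lvalues by reference and rvalues by value; this is a sketch of the status quo, not part of the post's proposal.

```d
// `auto ref` on a template function parameter accepts both lvalues and
// rvalues: lvalues are bound by reference, rvalues are passed by value.
size_t look(T)(auto ref const T x)
{
    return x.length; // read-only access
}

void main()
{
    int[] arr = [1, 2, 3];
    assert(look(arr) == 3);          // lvalue: bound by reference
    assert(look([1, 2, 3, 4]) == 4); // rvalue: also accepted
}
```

The drawback, and part of why the post asks for `in` to cover this, is that `auto ref` only works for templates; a non-template function still needs two overloads.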
Re: Should debug{} allow GC?
On Sunday, 11 September 2016 at 07:46:09 UTC, Manu wrote:
> I'm having a lot of trouble debugging @nogc functions. I have a number of debug functions that use GC, but I can't call them from @nogc code... should debug{} allow @nogc calls, the same as impure calls?

Generally, there is more to consider. It makes no sense to allow impure debug code inside a pure function and not to do the same for other attributes. For nothrow, it is also quite annoying. If no one has strong counterarguments, just file an enhancement request. Implementing it should not be too difficult.
Structural Function Attributes
I'm pretty sure someone before me has thought about this. Take pure as an example, but you can replace it by any subset of pure, nothrow, @safe and @nogc. Main reason: assume a struct with a simple opApply

struct R
{
    // pure -> error: dg possibly impure
    int opApply(scope int delegate(size_t i, ref int x) dg) { ... }
}

which performs only pure operations apart from the calls of dg. Then opApply is pure if dg is pure. Further assume a function using that opApply:

int wannaBePure(R r) // pure -> error: R.opApply possibly impure
{
    int s = 0;
    foreach (i, x; r) s += i * x;
    return s;
}

PROBLEM 1: Nobody wants to spell out all possible combinations of attributes the delegate can have. Worse, if the struct R is a template, it is nearly impossible, e.g. for @nogc. -> Use a template opApply to infer the attributes. PROBLEM 2: Specific to opApply, foreach cannot infer the loop variable types if opApply is a template; even a trivial one like opApply(DG : int delegate(size_t, ref int))(scope DG dg) does not help. -> A template is an improper solution. PROBLEM 3: Templates cannot be virtual. -> A template is not a solution at all. PROPOSED SOLUTION: For each (strong) attribute like pure, support a (weak) structural one, meaning that the function is pure if its delegate/function parameters are. PLUS: Always infer structural attributes for functions with delegate/function parameters, but keep the explicit notation possible, since the programmer may want to guarantee structural pureness; cf. attribute inference on function templates, which can also be denoted explicitly. FUN FACT: The structural attribute is actually not that weak. It makes (maybe many) functions pure that wouldn't be otherwise. Back to the example:

struct R
{
    struct(pure) // dg pure -> opApply pure
    int opApply(scope int delegate(size_t i, ref int x) dg) { ... }
}

int wannaBePure(R r) pure // pure opApply -> pure
{
    int s = 0;
    foreach (i, x; r) s += i * x; // pure body -> pure delegate -> pure opApply
    return s;
}

From what I see, struct(pure) is perfectly sound with inheritance and overriding. To the overload set, it can be seen as supplying both versions. As the generated code is not affected (only optimization is), this should be easy to handle in object code: the function has to be marked as struct(whatever), and it must be compiled as if it didn't have the attribute; outside, the attribute can be decided at compile time. Ideas?
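To make PROBLEM 1 and PROBLEM 2 concrete, here is a sketch of today's template workaround (the names are mine). I believe the explicitly typed foreach instantiates the templated opApply and lets the pure attribute be inferred, while the untyped `foreach (i, x; r)` cannot deduce DG, which is the type-deduction loss the post complains about:

```d
struct R
{
    int[3] xs = [1, 2, 3];

    // Templated opApply: pure/nothrow/@safe/@nogc are inferred from dg,
    // but foreach loses its implicit loop-variable type deduction.
    int opApply(DG)(scope DG dg)
    {
        foreach (i, ref x; xs)
            if (auto r = dg(i, x))
                return r;
        return 0;
    }
}

int wannaBePure(R r) pure
{
    int s = 0;
    // Explicit types are required here; `foreach (i, x; r)` cannot deduce DG.
    foreach (size_t i, int x; r)
        s += cast(int) i * x;
    return s;
}
```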
Re: Idea: swap with multiple arguments
On Monday, 23 May 2016 at 20:27:43 UTC, Steven Schveighoffer wrote: On 5/23/16 4:01 PM, Andrei Alexandrescu wrote: So swap(a, b) swaps the contents of a and b. This could be easily generalized to multiple arguments such that swap(a1, a2, ..., an) arranges things such that a1 gets an, a2 gets a1, a3 gets a2, etc. I do know applications for three arguments. Thoughts? -- Andrei One thing that screams out to me: this should be called rotate, not swap. -Steve Just name Andrei's function rotate and make swap an alias of it restricted to exactly two parameters. No confusion and everyone is happy.
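A minimal sketch of the suggested arrangement (the names and the swap-chain implementation are my choices, not anything proposed in the thread): a variadic rotate with Andrei's semantics, where a1 gets an, a2 gets a1, and so on, built from successive two-element swaps. With exactly two arguments it degenerates to plain swap.

```d
import std.algorithm.mutation : swap;

// a1 gets an, a2 gets a1, a3 gets a2, ...
// Implemented as a chain of swaps against the first slot.
void rotate(Ts...)(ref Ts args)
    if (Ts.length >= 2)
{
    static foreach (i; 1 .. Ts.length)
        swap(args[0], args[i]);
}

unittest
{
    int a = 1, b = 2, c = 3;
    rotate(a, b, c);
    assert(a == 3 && b == 1 && c == 2); // a gets c, b gets a, c gets b

    int x = 1, y = 2;
    rotate(x, y); // two arguments: plain swap
    assert(x == 2 && y == 1);
}
```

Note the swap chain requires the arguments to share one type; a heterogeneous version would need move-based assignments instead.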
Re: Walter's Famous German Language Essentials Guide
On Wednesday, 27 April 2016 at 03:59:04 UTC, Seb wrote: On Wednesday, 27 April 2016 at 02:57:47 UTC, Walter Bright wrote: To prepare for a week in Berlin, a few German phrases is all you'll need to fit in, get around, and have a great time: 1. Ein Bier bitte! 2. Noch ein Bier bitte! 3. Wo ist der WC! nitpick: Wo ist _das_ WC? In German we have definite articles, and as a WC can be used by both sexes, it is neuter (disclaimer: not a rule). However, it's more common to say "Wo ist die nächste Toilette?" Sorry, WC is neuter, but this has nothing to do with usage by both sexes. If you want a short explanation of where the different (linguistic) genders come from, have a look at http://www.belleslettres.eu/print/genus-gendersprech-v1.pdf (German), p. 3. In a nutshell: connecting gender with sex is wrong. Correlation is not causation. Sorry for being a smartass. I just have to.
Re: Associative Array .byKey / .byValue: Counter and Tuples
On Sunday, 3 April 2016 at 11:17:17 UTC, Mike Parker wrote: On Sunday, 3 April 2016 at 10:59:47 UTC, Q. Schroll wrote: Simple as that, suppose uint[uint] aa; Any range supports carrying an index. Not so the ranges returned by byKey and byValue. foreach (i, k; aa.byKey) { } and foreach (i, v; aa.byValue) { } both don't compile. That's incorrect. Only Random Access Ranges are indexable. The ranges returned by aa.byKey and aa.byValue are simply Input Ranges. Moreover, ranges do not by default allow for an index value in a foreach loop. That only works out of the box with arrays. It looks like I've used Random Access Ranges so far (without realizing that it matters that much). To get the same for a range, you can use std.range.enumerate:

import std.range : enumerate;
foreach (i, k; aa.byKey.enumerate) { }

Thanks. That solves the problem. Reason (I found out by chance): if the key or value type is a std.typecons.Tuple, iteration over aa.by* decomposes the Tuple if there is the right number of loop variables. For 2-tuples, both cannot be possible at once.

alias Tup = Tuple!(int, int);
int[Tup] it;
Tup[int] ti;
foreach (x, y; it.byKey) { }
foreach (x, y; ti.byValue) { }

Why is this undocumented? http://dlang.org/spec/hash-map.html doesn't mention Tuples at all! D's associative arrays don't know anything about Tuples, so there's no reason for the aa docs to talk about them. This behavior comes from how std.typecons.Tuple is implemented. Why is this useful? Anyone can decompose the Tuple with .expand if they like. I would prefer allowing an index. If you look at the source of Tuple, alias this is used on .expand, which is likely why they are automatically decomposed in an aa. I knew about the alias this on expand in Tuple, but simply didn't expect it could go that far. On the other hand, it sounds plausible. I never expected a special preference of AAs for Tuples. It's very interesting that alias this can work like this.
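Putting the two behaviors from the exchange side by side (the concrete key and values are mine): tuple keys get decomposed into the loop variables, while enumerate turns the first variable into a counter instead.

```d
import std.range : enumerate;
import std.typecons : Tuple, tuple;

void main()
{
    alias Tup = Tuple!(int, int);
    int[Tup] it = [tuple(1, 2): 42];

    // The 2-tuple key is decomposed into (x, y), because Tuple's
    // `alias this` on .expand exposes its fields to foreach.
    foreach (x, y; it.byKey)
        assert(x == 1 && y == 2);

    // With enumerate, the first variable is a running index and the
    // second is the whole tuple key.
    foreach (i, k; it.byKey.enumerate)
        assert(i == 0 && k == tuple(1, 2));
}
```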
Associative Array .byKey / .byValue: Counter and Tuples
Simple as that, suppose uint[uint] aa; Any range supports carrying an index. Not so the ranges returned by byKey and byValue.

foreach (i, k; aa.byKey) { }
foreach (i, v; aa.byValue) { }

both don't compile. Reason (I found out by chance): if the key or value type is a std.typecons.Tuple, iteration over aa.by* decomposes the Tuple if there is the right number of loop variables. For 2-tuples, both cannot be possible at once.

alias Tup = Tuple!(int, int);
int[Tup] it;
Tup[int] ti;
foreach (x, y; it.byKey) { }
foreach (x, y; ti.byValue) { }

Why is this undocumented? http://dlang.org/spec/hash-map.html doesn't mention Tuples at all! Why is this useful? Anyone can decompose the Tuple with .expand if they like. I would prefer allowing an index. If it does not meet the spec, is it a bug then?
Re: Could we reserve void[T] for builtin set of T ?
On Friday, 1 April 2016 at 08:52:40 UTC, Q. Schroll wrote: The methods add and remove return bool values that indicate whether the state changed:
• add returns true iff the key was not already present.
• remove returns true iff the key was already present.
Should have been:
• add returns true iff the element was not already present.
• remove returns true iff the element was already present.
as we shouldn't call the elements of a set its "keys".
Re: Could we reserve void[T] for builtin set of T ?
On Friday, 1 April 2016 at 09:55:58 UTC, cym13 wrote: On Friday, 1 April 2016 at 08:52:40 UTC, Q. Schroll wrote: [...] I agree with most of what is said here; assigning true or false makes for an awful API compared to add() and remove(). I agree with Adam Ruppe that if we are to use AA-like syntax, we have to keep a coherent API. I don't like the AA-like syntax either. But in some sense it is straightforward. Look at s[x, y] = true; You can have an overload of add making s.add(x, y) do that. But wait, add has a return value. Let x be already present in s and y not. What would you expect s.add(x, y) to return? This is unclear and ambiguous. In this case, I find the AA-like syntax nicer. I've read through the thread without exactly tracking who said what. On Thursday, 31 March 2016 at 20:11:39 UTC, Adam D. Ruppe wrote: aa[x] returns void, which, having no value, would be a compile error. The idea is great and I've adopted it in some sense. I don't know the API, so I cannot tell if something slightly hurts it. Is the add function a coherent API? Also, I don't like join etc... Please, just take the Python semantics. Interestingly, I've never seen meaningful Python code -- you can believe it or not. Most of the proposal is very close to Python, right? The link at least lets me believe it is. Actually, this proves the stuff is natural to people. Unfortunately, union is a keyword in D, so we just can't use it. You can propose another name for set union if you are dissatisfied with join. I don't stick very much to that. Syntax highlighting does a good job here of telling someone that union will likely not compile. Is this all? What does your "etc." mean? Sets have been a builtin type for a long time now in that language and they just make sense; they are very polished. Sounds like we should have sets too and profit from the experience. Not to mention that many people who expect sets to be part of the language itself seem to come from Python.
Re: Could we reserve void[T] for builtin set of T ?
On Thursday, 31 March 2016 at 19:57:50 UTC, Walter Bright wrote:

aa[x] = true;  // add member x
aa[x] = false; // remove member x
x in aa;       // compile error

On Friday, 1 April 2016 at 02:36:35 UTC, Jonathan M Davis wrote: Still, while it's true that aa.remove is how you'd normally do it, I think that Walter's suggestion of assigning true or false makes by far the most sense of the ones made thus far - and you could just make aa.remove(key); and aa[key] = false; equivalent for void[T] to make it more consistent. - Jonathan M Davis The basic idea is great. But what could the details look like? What do we want in detail? We want simple and suggestive operations on sets, like modification of single elements, iteration, union, intersection, (relative) complement, cartesian product, pointwise function application, filtering, and much more. Maybe even suggestive set comprehension is possible. About the special case: We encounter (yet another) special case for something built of void. This is not a major deal. Everyone knows (or should know) that void is a special case nearly everywhere it emerges:
• void-returning functions.
• void* is much different from any other pointer.
• void[] and void[n] are different from usual arrays.
Now we add:
• void[T] is different from K[T] (for K != void).
From the view of a D programmer, this is not a big deal to accept. From the view of a D learner, this is yet another void special case, maybe even easier to fully understand than void*. First of all, we do not use the term associative array for sets. This is wrong and confusing, at least for beginners. We can have AA declaration syntax without AA indexing syntax. The set indexing syntax will be a bit different.

void[T] s;         // declare s as a set of T
s.add(x);          // add x of type T
s.remove(x);       // remove x of type T
auto r1 = x in s;  // r1 is of type bool; r1 == true iff x is in s.
auto r2 = x !in s; // r2 is of type bool; r2 == true iff x is not in s.
Further we allow, not only for convenience:

s[x] = true;  // same side effect as add
s[x] = false; // same side effect as remove

This is not AA indexing syntax, so let's call it set indexing and nothing else. It looks similar to bool[T] syntax, but a void[T] is different from bool[T] by design and idea. The methods add and remove return bool values that indicate whether the state changed:
• add returns true iff the key was not already present.
• remove returns true iff the key was already present.
The opIndexAssign should return the assigned bool; everything else would be totally unexpected. It is a bit like assigning to the expression x in s, which is an rvalue. It is illegal to use opIndex with parameters. The only legal indexing expressions will be:
• s[]: legally used to operate on the set pointwise (see later).
• s[x] = b
• s[x] op= b, where op is one of |, &, ^: s[x] op= b does s[x] = (x in s) op b.
I don't see an application of the latter, but there is no reason to disallow it; rather, discourage it. When assigning bool literals with set indexing syntax where the value of the assignment is not used, the compiler should emit a warning and suggest using add or remove respectively.

bool b = expression();
s[x] = true;         // compiler suggests using add
s[x] = false;        // compiler suggests using remove
s[x] = b;            // ok, b is not a literal
s[x] = expression(); // ok
s[x] = s[y] = true;  // ok; for s[y] = true, the value (true) is being used;
                     // for s[x] we have s[y] = true as the expression

Known from AAs, we will also have
• sizeof
• length
• dup
• rehash
• clear
in the expected form, but we won't have
• keys
• values
• byKey()
• byValue()
That is because to me it is odd to call the elements keys. And what would the values be then? We don't even need these. New/changed ones:
• singleton (new)
• get (known from AAs, but with other semantics)
For a singleton set, singleton returns the only element. Otherwise RangeError.
get returns a pointer to the only element of a singleton set, or null for the empty set. If the set contains more than one element, RangeError. get can be useful with

if (auto xp = s.get) { /+ use the unique element *xp +/ }
else { /+ empty set handling +/ }

[Aside: There is no add for AAs for a good reason.] Iteration:

foreach (ref x; s) // ref is optional!
{ ... }

Because the pseudo index type is void, there are no index values, so indexed iteration is illegal:

foreach (i, ref x; s) // compile-time error
{ ... }

Sets cannot be lockstepped over; that would need a canonical order. But it makes sense to chain sets etc. Initialization: a set can also be initialized from a value of type T, T[], or bool[T].

void[T] s;     // makes an empty set
void[T] s = x; // x of type T: makes a singleton set
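Much of the proposed add/remove/in surface can be sketched today as a library type over an ordinary AA. This is only my illustration of the semantics described above, not the proposal itself; the proposal's whole point is a builtin void[T] that stores no per-element values, whereas this sketch wastes a bool per element.

```d
struct Set(T)
{
    private bool[T] data; // values are unused; only the keys matter

    // true iff x was not already present (the proposed add semantics)
    bool add(T x)
    {
        if (x in data) return false;
        data[x] = true;
        return true;
    }

    // true iff x was present; builtin AA remove already returns this bool
    bool remove(T x) { return data.remove(x); }

    // makes `x in s` and `x !in s` yield bool, as proposed
    bool opBinaryRight(string op : "in")(T x) const
    {
        return (x in data) !is null;
    }

    @property size_t length() const { return data.length; }
}

unittest
{
    Set!int s;
    assert(s.add(1));
    assert(!s.add(1));   // already present
    assert(1 in s);
    assert(s.remove(1));
    assert(!s.remove(1)); // no longer present
    assert(s.length == 0);
}
```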
opApply and opApplyReverse
Once upon a time, we decided that we could replace opNeg, opCom, etc. and opAdd, opSub, etc. by the generic names opUnary and opBinary. Why don't we have a single iteration operator? Can't we just provide the "Reverse" information by some bool argument (or a string, if you'd like more functionality in the future), like the other operators? I'd propose making opApply and opApplyReverse aliases of the !false / !"Forward" and !true / !"Reverse" instantiations for compatibility. The workaround I use:

static string opApp(bool rev)()
{
    import std.format : format;
    immutable code = q{
        int opApply%s(scope int delegate(ref Idcs) dg)
        {
            ... // (with %s everywhere something Reverse-generic happens)
        }
    };
    return rev ? code.format("Reverse", ...) : code.format("", ...);
}

mixin(opApp!false());
mixin(opApp!true());
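As a complete working instance of that string-mixin workaround (the struct, its iteration bounds, and placing the generator at module scope are my choices), here is a version generating both directions from one template:

```d
// Generates the source of opApply or opApplyReverse; the shared body has
// its direction-specific pieces filled in via std.format.format.
private string opApp(bool rev)()
{
    import std.format : format;
    enum code = q{
        int opApply%s(scope int delegate(ref int) dg)
        {
            for (int i = %s; i %s; i %s)
                if (auto r = dg(i))
                    return r;
            return 0;
        }
    };
    return rev ? code.format("Reverse", "9", ">= 0", "-= 1")
               : code.format("",        "0", "< 10", "+= 1");
}

struct Iota10
{
    mixin(opApp!false()); // opApply:        0 .. 9
    mixin(opApp!true());  // opApplyReverse: 9 .. 0
}

unittest
{
    int[] fwd, bwd;
    Iota10 r;
    foreach (x; r) fwd ~= x;
    foreach_reverse (x; r) bwd ~= x;
    assert(fwd == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
    assert(bwd == [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]);
}
```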