Re: dcaflib, unix terminal
On Saturday, 17 March 2012 at 03:00:36 UTC, Nathan M. Swan wrote: In a post from a few weeks ago, someone mentioned terminal colors. Currently, I have one that works with bash (cmd pending) at https://github.com/carlor/dcaflib. Example code: import dcaflib.ui.terminal; import std.stdio; void main() { fgColor = TermColor.RED; writeln("this is red!"); fgColor = TermColor.BLUE; writeln("this is blue!"); } This worked for me with Ubuntu, though I had to use rdmd instead of dmd. I'm using a version like this for Windows for one of my programs. I've found that on Unix OSes you can't edit text very nicely at all using readln() etc.; Windows doesn't have that problem. I don't even bother with it because of that. -Joel
Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}
Oh, and also: changing version(linux) to version(Posix) for the color output management would be great. (I'm on FreeBSD and was wondering why I had no colors as advertised :} ).
GDC goes github
Morning All, I have created a new GDC project on github, where I hope people will help contribute and continue development of the compiler there. https://github.com/gdc-developers I've been told to cue Walter asking to rename the organisation to D-Programming-GDC. :o) I have also bought a new server, and will be getting a site up in due course. http://dgnu.org/ http://gdcproject.org/ Regards Iain.
Re: GDC goes github
On 18-03-2012 13:39, Iain Buclaw wrote: Morning All, I have created a new GDC project on github, where I hope people will help contribute and continue development of the compiler there. https://github.com/gdc-developers I've been told to cue Walter asking to rename the organisation to D-Programming-GDC. :o) I have also bought a new server, and will be getting a site up in due course. http://dgnu.org/ http://gdcproject.org/ Regards Iain. Great news! This will make it a lot easier to send patches. -- - Alex
Re: GDC goes github
On 3/18/12 7:39 AM, Iain Buclaw wrote: I have created a new GDC project on github, where I hope people will help contribute and continue development of the compiler there. https://github.com/gdc-developers I've been told to cue Walter asking to rename the organisation to D-Programming-GDC. :o) I have also bought a new server, and will be getting a site up in due course. http://dgnu.org/ http://gdcproject.org/ This is awesome! Andrei
StackOverflow Chat Room
Hey guys, I made a StackOverflow chat room. You don't have to use it or anything, but at least it exists now. It's called Dlang: http://chat.stackoverflow.com/rooms/9025/dlang -- James Miller
Re: GDC goes github
On 3/18/2012 5:39 AM, Iain Buclaw wrote: I've been told to cue Walter asking to rename the organisation to D-Programming-GDC. :o) Done.
Re: GDC goes github
On Monday, 19 March 2012 at 00:22:48 UTC, Walter Bright wrote: On 3/18/2012 5:39 AM, Iain Buclaw wrote: I've been told to cue Walter asking to rename the organisation to D-Programming-GDC. :o) Done. I must admit I did not anticipate you creating a new repository. Thanks! I'll be sure to move the bits I need across.
The definition of templates in D
A template is a parameterized namespace. That is, it is a namespace (a name through which other objects can be accessed) that may be passed parameters that can modify the nature of the stuff inside. If a template is a compile-time function, then the equivalent of a function call - the association of a template description with specific arguments - is called a template instantiation. Templates have the following properties: * they're unique; that is, a member of a template instantiation always refers to the same thing as a member of the same instantiation in a different module * they're compile-time; that is, it is impossible to instantiate a template while the program runs. * In D, if a template contains only one member, and its name is the same as the template's, then the member is assumed to *be* the template instantiation. That's all! In D, void foo(T)(T t) { } is just short for template foo(T) { void foo(T t) { } }. So foo!(int) == member foo of instantiation of template foo with parameter T = int. There's a shortcut for this, called IFTI, implicit function template instantiation. If you have a function template - that is, a template containing only one function with the same name as the template - then calling the template as if it were a function will simply instantiate it with the types of the arguments. Example: template bar(T...){ void bar(T t) { writefln(t); } } bar(2, 3, 4); // is equivalent to bar!(int, int, int)(2, 3, 4); // is equivalent to bar!(int, int, int).bar(2, 3, 4);
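A minimal runnable sketch of the equivalence and of IFTI described above (the names twice/twiceShort are illustrative, not from the post):

```d
import std.stdio;

// The long form: a template (a parameterized namespace) whose single
// member shares the template's name, so the member *is* the instantiation.
template twice(T)
{
    T twice(T x) { return cast(T)(x + x); }
}

// The short form described above is just sugar for the long form.
T twiceShort(T)(T x) { return cast(T)(x + x); }

void main()
{
    // Explicit instantiation: member twice of twice!(int).
    writeln(twice!(int)(21));   // 42
    // IFTI: T is inferred from the argument's type.
    writeln(twiceShort(21));    // 42
    writeln(twiceShort(1.5));   // 3
}
```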
Re: Issue 7670
On Sun, Mar 18, 2012 at 05:33:21AM +0100, F i L wrote: [...] Is UFCS planned for 2.059? [...] It's already checked into git. So it will be in 2.059 for sure. T -- Let's call it an accidental feature. -- Larry Wall
Re: Array operation a1 + a2 not implemented!
On Sat, Mar 17, 2012 at 11:37:15PM -0500, Caligo wrote: void main() { float[4] a1 = [1, 2, 3, 4]; float[4] a2 = [3, 2, 8, 2]; auto r = a1 + a2; } When are they going to be implemented? Are you trying to concatenate the arrays or sum their elements? Here's how to concatenate: auto r = a1 ~ a2; Here's how to sum: auto r = a1[] + a2[]; T -- Klein bottle for rent ... inquire within. -- Stephen Mulraney
Re: Array operation a1 + a2 not implemented!
On Sun, Mar 18, 2012 at 1:18 AM, H. S. Teoh hst...@quickfur.ath.cx wrote: Are you trying to concatenate the arrays or sum their elements? Here's how to concatenate: auto r = a1 ~ a2; Everybody knows about concatenation. Here's how to sum: auto r = a1[] + a2[]; With the latest 2.059 I'm getting 'Error: Array operation a1[] + a2[] not implemented'
Re: Understanding Templates: why can't anybody do it?
On Sat, 17 Mar 2012 15:39:53 -0700 H. S. Teoh hst...@quickfur.ath.cx wrote: I don't think you can discard OOP entirely. It still has its place IMO. When you need runtime polymorphism, OOP is still the best tool for the job. Hmm, if we want to write more FP-like type-safe code, I wonder how much we'd need runtime polymorphism at all? Sincerely, Gour -- Never was there a time when I did not exist, nor you, nor all these kings; nor in the future shall any of us cease to be. http://atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Re: Ideas for Phobos., u++ STL speed comparison
On Friday, 16 March 2012 at 14:26:19 UTC, Andrei Alexandrescu wrote: On 3/16/12 8:39 AM, Jay Norwood wrote: On Friday, 16 March 2012 at 13:20:39 UTC, Jay Norwood wrote: btw, the u++ page claims in the link below to be faster than D by 70% on some operations, which they attribute to their STL rewrite. Maybe someone should take a look at what they've done. Or maybe this comparison is out of date... http://www.ultimatepp.org/www$uppweb$vsd$en-us.html The test uses a specific data structure, an indexed contiguous array. To conclude from here that C++ is faster than D is quite a stretch. Andrei Ok, but maybe the upp ArrayMap is pretty efficient for certain things ... by their benchmarks 4x faster than STL on whatever they were doing. I tried rewriting the D example code, and upp is consistently a bit faster when running on a single core. http://www.ultimatepp.org/src$Core$ArrayMap$en-us.html A D std.parallelism library rewrite of the example runs about 2x faster than the current upp example code on a corei7 box, if you give it several files to work on. The execution doesn't scale as much as I expected, probably because the dictionary gets duplicated in the parallel case, while the single thread just increments counts in the same dictionary. I believe the TDPL book mentioned some research on non-locking, shared memory containers, but I didn't see anything documented in the D libraries. There is the workerLocalStorage area ... but it wouldn't help with the problem of the dictionary getting duplicated in this case. It looks like there would be a reduce step required to merge the dictionary counts at the end.
Re: Array operation a1 + a2 not implemented!
Caligo wrote: With the latest 2.059 I'm getting 'Error: Array operation a1[] + a2[] not implemented' Must be a bug with 2.059, works in 2.058. But, why aren't there operators for arrays without having to specify []? int[] a, b; int[] r = a + b; Pointer arithmetic maybe?
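For what it's worth, a sketch of the distinction (based on the fixes given elsewhere in this thread): slice syntax selects the element-wise array operation, and the result generally wants a preallocated destination, while plain identifiers without [] are left for pointer/reference semantics:

```d
import std.stdio;

void main()
{
    float[4] a1 = [1, 2, 3, 4];
    float[4] a2 = [3, 2, 8, 2];

    // Element-wise sum: write into an existing array via slice syntax.
    float[4] r;
    r[] = a1[] + a2[];
    writeln(r);              // [4, 4, 11, 6]

    // Concatenation, by contrast, allocates a new dynamic array.
    float[] c = a1[] ~ a2[];
    writeln(c.length);       // 8
}
```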
Re: Issue 7670
H. S. Teoh wrote: It's already checked into git. So it will be in 2.059 for sure. Great!
Re: The definition of templates in D
On Sun, 18 Mar 2012 17:01:00 +1100, FeepingCreature default_357-l...@yahoo.de wrote: There's a shortcut for this, called IFTI, implicit function template instantiation ... What would be useful is ... template bar(T...){ void bar(T t) { writefln(t); } } int a,b,c; bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); -- Derek Parnell Melbourne, Australia
Re: virtual-by-default rant
On 3/18/2012 12:27 PM, bearophile wrote: F i L: I'm a bit confused. Reading through the virtual function's docs (http://dlang.org/function.html#virtual-functions) it says: All non-static non-private non-template member functions are virtual. This may sound inefficient, but since the D compiler knows all of the class hierarchy when generating code, all functions that are not overridden can be optimized to be non-virtual. This is so much theoretical that I think this should be removed from the D docs. And to be put back when one DMD compiler is able to do this. Otherwise it's just false advertising :-) Bye, bearophile It says "can be optimized", not "are optimized". Big difference.
Re: Proposal: user defined attributes
On 3/17/2012 10:01 PM, F i L wrote: Walter Bright wrote: My impression is it is just obfuscation around a simple lazy initialization pattern. While I can see the abstraction usefulness of compile time attribute metadata, I am having a hard time seeing what the gain is with runtime attributes over more traditional techniques. I'm not sure exactly what you mean by "runtime attributes". Do you mean the ability to reflect upon attributes at runtime? I mean there is modifiable-at-runtime, instance-specific data. In that case, there's two benefits I can think of over traditional member data: Instance memory (attributes are created at reflection), Lazy initialization is a standard pattern. No special language features are needed for it. and reusable metadata packages (similar to how you might use mixin templates only with less noise and reflection capabilities). Sounds like a garden variety user-defined data type. Again, I am just not seeing the leap in power with this. It's a mystery to me how a user defined attribute class is different in any way from a user defined class, and what advantage special syntax gives a standard pattern for lazy initialization, and why extra attribute fields can't be just, well, fields.
Re: Proposal: user defined attributes
On 3/17/2012 8:12 PM, Adam D. Ruppe wrote: Walter, how do you feel about the compile time annotation list? I feel it has merit, but I think it is possibly more complex than necessary. Note that I have never used user defined attributes, so I don't have experience to guide me on this.
Re: The definition of templates in D
On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); }
Re: The definition of templates in D
Andrej Mitrovic andrej.mitrov...@gmail.com wrote in message news:mailman.851.1332059038.4860.digitalmar...@puremagic.com... On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); } Shouldn't that be: template bar(T...) { void bar(T args) { writeln(args); } } void main() { int a = 1, b = 2, c = 3; bar(a, b, c); } Or did I misunderstand the point?
Re: force inline/not-inline
On 17.03.2012 23:53, Manu wrote: I just started writing an emulator in D for some fun; I needed an application to case-study aggressive performance characteristics in hot-loop situations. I know this has come up time and time again, but I just want to put it out there again... if I were shipping this product, I would NEED forceinline + force-not-inline. I know D likes to try and intelligently inline code, but in these very high performance cases, I know what's best for my code, and I have also shipped this product commercially before. I know exactly what was required of it from months of meticulous performance profiling, and I can see immediately that D is not making the right choices. Programmers need to be able to explicitly control inlining in many cases. Cross-module inlining is a really big problem. What is the plan here? Is there a solution in the foreseeable future? What about libs? +1 Currently I use string mixins to force inlining - but that's ugly
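For reference, one shape the string-mixin workaround can take (a sketch; the names and the lerp expression are made up): the would-be-inlined body is kept as an expression string and pasted at the point of use, so there is no function call for the compiler to inline (or not):

```d
import std.stdio;

// The "inlined" body lives in an enum string; a, b and t must be
// in scope wherever the string is mixed in.
enum string lerpBody = q{ (a + (b - a) * t) };

void main()
{
    double a = 0.0, b = 10.0, t = 0.25;

    // Mixing the string in pastes the expression directly into the
    // surrounding code - no call is ever emitted.
    double v = mixin(lerpBody);
    writeln(v);   // 2.5
}
```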
Re: virtual-by-default rant
On Sunday, 18 March 2012 at 03:27:40 UTC, bearophile wrote: F i L: All non-static non-private non-template member functions are virtual. This may sound inefficient, but since the D compiler knows all of the class hierarchy when generating code, all functions that are not overridden can be optimized to be non-virtual. This is so much theoretical that I think this should be removed from the D docs. And to be put back when one DMD compiler is able to do this. Otherwise it's just false advertising :-) Is this even possible without LTO/WPO? Extending a class defined in a library you link in (and for which codegen already happened) is certainly possible… David
Re: Proposal: user defined attributes
Walter Bright wrote: I mean there is modifiable-at-runtime, instance-specific data. In C#, no there isn't. Attributes are simply objects constructed (when gotten) from an Entity's metadata. No memory is stored per-instance unless you manage the objects manually: class A : Attribute { public string s = "Default"; } [TestA] class C {} static void Main() { // The line below is equivalent to: var a = new A(); // except that its construction is defined by // metadata stored in type C. var a = typeof(C).GetCustomAttributes(true)[0] as A; a.s = "Modification"; Console.WriteLine(a.s); // prints "Modification" // Therefore... var b = typeof(C).GetCustomAttributes(true)[0] as A; Console.WriteLine(b.s); // prints "Default" } In that case, there's two benefits I can think of over traditional member data: Instance memory (attributes are created at reflection), Lazy initialization is a standard pattern. No special language features are needed for it. I see how my statements (and code examples) were confusing. I meant that no attribute data is stored per-instance at all (unless traditionally done so), and that attribute objects are simply created in-place at the point of access. So to clarify my previous code a bit: attribute class A { string i = "Default"; } @A class C { A a; } void main() { auto a = C@A; // create new A based on C assert(is(typeof(a) : A)); // alternatively you could do: auto c = new C(); auto a = c@A; // same as: typeof(c)@A c.a = c@A; // explicitly store attribute } Note: Might want to use the new keyword with class type attributes (auto a = new C@A), but the idea's there. Plus, I think that looks a lot better than the C# version. and reusable metadata packages (similar to how you might use mixin templates only with less noise and reflection capabilities). Sounds like a garden variety user-defined data type. It is. Only it's a data type whose construction values are stored in metadata (per entity), and therefore can be used at both compile and run times.
By per-entity I mean for each unique Type, Type member, Sub-Type, etc. I don't know of any existing D idiom that is capable of what I presented. If there is, I would like to know. Here's the closest I can think of: mixin template CoolInt(bool cool, string name) { mixin("enum bool " ~ name ~ "_isCool = cool;"); mixin("int " ~ name ~ ";"); } class CoolClass { mixin CoolInt!(true, "a"); mixin CoolInt!(false, "b"); } void main() { auto c = new CoolClass(); writeln(c.a_isCool); // true writeln(c.b_isCool); // false } which, aside from noise, is great for runtime reflection, but it's completely useless (I think) for the compiler because the variables are created through arbitrary strings. Plus, I don't know how you'd store anything but simple variables; more complex data would require a lot of entity_variables.
Re: Understanding Templates: why can't anybody do it?
On 18.03.2012 07:41, Gour wrote: On Sat, 17 Mar 2012 15:39:53 -0700 H. S. Teoh hst...@quickfur.ath.cx wrote: I don't think you can discard OOP entirely. It still has its place IMO. When you need runtime polymorphism, OOP is still the best tool for the job. Hmm, if we want to write more FP-like type-safe code, I wonder how much we'd need runtime polymorphism at all? Sincerely, Gour Enterprise software?
Re: The definition of templates in D
On Sun, 18 Mar 2012 19:16:02 +1100, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); } Almost, but more like this ... template add(X,Y,Z) { X add(Y a, Z b) { return cast(X) (cast(X)a + cast(X)b); } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!(u,s); writefln( "%s %s %s", s, t, u ); } This currently errors with ... Error: template instance add!(u,s) add!(u,s) does not match template declaration add(X,Y,Z) -- Derek Parnell Melbourne, Australia
Re: Proposal: user defined attributes
F i L wrote: Plus, I don't know how you'd store anything but simple variables, which more complex data would require a lot of entity_variables. I found a way: mixin template Attribute( string type, string name, string attr, Params...) { mixin (type ~ " " ~ name ~ ";"); mixin ( "auto " ~ name ~ "_" ~ attr ~ "() " ~ "{ return new " ~ attr ~ "(Params); }" ); } class Cool { string s; this(string s) { this.s = s; } } class CoolClass { mixin Attribute!("int", "a", "Cool", "Heh"); mixin Attribute!("int", "b", "Cool", "Sup"); } void main() { auto c = new CoolClass(); writeln(c.a, ", ", c.b); // 0, 0 writeln(c.a_Cool().s); // Heh writeln(c.b_Cool().s); // Sup }
Re: Issue 7670
On 18.03.2012 6:50, bearophile wrote: I need to attract a bit of your attention to this: http://d.puremagic.com/issues/show_bug.cgi?id=7670 Currently this (very nice) code gets accepted even with the -property compiler switch: [1, 2, -3, 4].filter!(x => x > 0).map!(x => x ^^ 0.2).writeln; I fail to see how -property should affect this code in any way, no '=' anywhere in sight. UFCS is indeed nice. -- Dmitry Olshansky
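As a complete program (assuming a UFCS-capable build, i.e. the then-unreleased 2.059, and std.math for the floating-point ^^ operator):

```d
import std.algorithm : filter, map;
import std.math;             // required for ^^ with a floating exponent
import std.stdio : writeln;

void main()
{
    // UFCS: the array literal becomes the first argument of each call.
    [1, 2, -3, 4].filter!(x => x > 0)
                 .map!(x => x ^^ 0.2)
                 .writeln();
}
```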
Re: The definition of templates in D
On 03/18/12 11:29, Derek wrote: On Sun, 18 Mar 2012 19:16:02 +1100, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); } Almost, but more like this ... template add(X,Y,Z) { X add(Y a, Z b) { return cast(X) (cast(X)a + cast(X)b); } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!(u,s); writefln( "%s %s %s", s, t, u ); } This currently errors with ... Error: template instance add!(u,s) add!(u,s) does not match template declaration add(X,Y,Z) why would you do that what do you want to _do_ it sounds like you're frantically trying to nail templates into a shape that they really really really aren't meant for in any case what is wrong with auto add(T)(T t) { return t[0] + t[1]; }
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 10:25:20 UTC, F i L wrote: F i L wrote: class CoolClass { mixin Attribute!("int", "a", "Cool", "Heh"); mixin Attribute!("int", "b", "Cool", "Sup"); } void main() { auto c = new CoolClass(); writeln(c.a, ", ", c.b); // 0, 0 writeln(c.a_Cool().s); // Heh writeln(c.b_Cool().s); // Sup } Is it not possible to alias a mixin to just one letter, and then use it to have any syntax we want... something like this: x(@attribute(Serializable.yes) int a);
Re: The definition of templates in D
On 03/18/12 11:36, FeepingCreature wrote: On 03/18/12 11:29, Derek wrote: On Sun, 18 Mar 2012 19:16:02 +1100, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); } Almost, but more like this ... template add(X,Y,Z) { X add(Y a, Z b) { return cast(X) (cast(X)a + cast(X)b); } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!(u,s); writefln( "%s %s %s", s, t, u ); } This currently errors with ... Error: template instance add!(u,s) add!(u,s) does not match template declaration add(X,Y,Z) why would you do that what do you want to _do_ it sounds like you're frantically trying to nail templates into a shape that they really really really aren't meant for in any case what is wrong with auto add(T)(T t) { return t[0] + t[1]; } oh you may have misunderstood me a template is a **compile time parameterized namespace** its parameters are **types** and **constants**, not runtime values add is a namespace that is instantiated with the types float and H I get what you want. :D template add(T) { template add(U...) { auto add(U u) { T res; foreach (value; u) res += value; return res; } } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!int(u, s); writefln( "%s %s %s", s, t, u ); }
Re: The definition of templates in D
On 03/18/12 11:39, FeepingCreature wrote: On 03/18/12 11:36, FeepingCreature wrote: On 03/18/12 11:29, Derek wrote: On Sun, 18 Mar 2012 19:16:02 +1100, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); } Almost, but more like this ... template add(X,Y,Z) { X add(Y a, Z b) { return cast(X) (cast(X)a + cast(X)b); } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!(u,s); writefln( %s %s %s, s,t, u ); } This currently errors with ... Error: template instance add!(u,s) add!(u,s) does not match template declaration add(X,Y,Z) why would you do that what do you want to _do_ it sounds like you're frantically trying to nail templates into a shape that they really really really aren't meant for in any case what is wrong with auto add(T)(T t) { return t[0] + t[1]; } oh you may have misunderstood me a template is a **compile time parameterized namespace** its parameters are **types** and **constants**, not runtime values add is a namespace that is instantiated with the types float and H I get what you want. :D template add(T) { template add(U...) { auto add(U u) { T res; foreach (value; u) res += value; return res; } } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!int(u, s); writefln( %s %s %s, s, t, u ); } which of course doesn't work because you can't add a double to an int. So .. maybe I don't get what you want.
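A guess at a version of that nested template which does compile: cast each operand to the accumulator type before adding, so the failing int += double never occurs (the truncation behavior is an assumption on my part):

```d
import std.stdio;

template add(T)
{
    // Inner eponymous function template; U... is deduced per call (IFTI).
    auto add(U...)(U u)
    {
        T res = 0;
        foreach (value; u)
            res += cast(T) value;   // the cast makes int += double legal
        return res;
    }
}

void main()
{
    double s = 1.23;
    ulong  u = 456;

    int t = add!int(u, s);   // 456 + cast(int)1.23 == 457
    writeln(t);              // 457
}
```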
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 10:38:19 UTC, Tove wrote: On Sunday, 18 March 2012 at 10:25:20 UTC, F i L wrote: F i L wrote: class CoolClass { mixin Attribute!("int", "a", "Cool", "Heh"); mixin Attribute!("int", "b", "Cool", "Sup"); } void main() { auto c = new CoolClass(); writeln(c.a, ", ", c.b); // 0, 0 writeln(c.a_Cool().s); // Heh writeln(c.b_Cool().s); // Sup } Is it not possible to alias a mixin to just one letter, and then use it to have any syntax we want... something like this: x(@attribute(Serializable.yes) int a); Sure, but there's still the issue of using attributes for codegen. For instance compare: struct Test { @GC.NoScan int value; } to, the current: struct Test { int value; this() { GC.setAttr(value, NO_SCAN); } } How can we do that with mixin templates? If attributes were a language type the compiler could exploit in a consistent way, it would be *trivial* describing this behavior in a declarative way.
Re: The definition of templates in D
On Sun, 18 Mar 2012 21:40:10 +1100, FeepingCreature default_357-l...@yahoo.de wrote: On 03/18/12 11:39, FeepingCreature wrote: On 03/18/12 11:36, FeepingCreature wrote: On 03/18/12 11:29, Derek wrote: On Sun, 18 Mar 2012 19:16:02 +1100, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: On 3/18/12, Derek ddparn...@bigpond.com wrote: What would be useful is ... bar!(a, b, c); // is equivalent to bar!(int, int, int).bar(a, b, c); You mean like this? template bar(T...) { void bar() { writeln(T); } } void main() { int a = 1, b = 2, c = 3; bar!(a, b, c); } Almost, but more like this ... template add(X,Y,Z) { X add(Y a, Z b) { return cast(X) (cast(X)a + cast(X)b); } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!(u,s); writefln( %s %s %s, s,t, u ); } This currently errors with ... Error: template instance add!(u,s) add!(u,s) does not match template declaration add(X,Y,Z) why would you do that what do you want to _do_ it sounds like you're frantically trying to nail templates into a shape that they really really really aren't meant for in any case what is wrong with auto add(T)(T t) { return t[0] + t[1]; } oh you may have misunderstood me a template is a **compile time parameterized namespace** its parameters are **types** and **constants**, not runtime values add is a namespace that is instantiated with the types float and H I get what you want. :D template add(T) { template add(U...) { auto add(U u) { T res; foreach (value; u) res += value; return res; } } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; t = add!int(u, s); writefln( %s %s %s, s, t, u ); } which of course doesn't work because you can't add a double to an int. So .. maybe I don't get what you want. The 'adding' is not the point; it could be any functionality. 
The point I was trying to get across was that it would be useful if the compiler could infer the type parameters of a template instantiation from the types of the data items used in the instantiation reference. My original code would work if I had written ... t = add!(int, ulong, double)(u, s); but I was thinking that coding (int, ulong, double) is a bit redundant, as this information is available to the compiler already as the arguments' types. And by the way, none of the counter-examples so far would compile for me. Still complaining about add!(u,s) does not match template declaration ... -- Derek Parnell Melbourne, Australia
Re: The definition of templates in D
On Sun, 18 Mar 2012 21:36:46 +1100, FeepingCreature default_357-l...@yahoo.de wrote: why would you do that To make coding easier to write AND read. what do you want to _do_ Infer template arguments from the data types presented in the data values supplied on the instantiation statement. it sounds like you're frantically trying to nail templates into a shape that they really really really aren't meant for I assumed D templates were a type of template; a model for real runnable code that the compiler can instantiate based on the arguments supplied. in any case what is wrong with auto add(T)(T t) { return t[0] + t[1]; } It doesn't work. -- Derek Parnell Melbourne, Australia
Re: virtual-by-default rant
On 18 March 2012 04:47, F i L witte2...@gmail.com wrote: I'm a bit confused. Reading through the virtual function's docs ( http://dlang.org/function.**html#virtual-functionshttp://dlang.org/function.html#virtual-functions) it says: All non-static non-private non-template member functions are virtual. This may sound inefficient, but since the D compiler knows all of the class hierarchy when generating code, all functions that are not overridden can be optimized to be non-virtual. So if all functions are automatically optimized to non-virtual where applicable, then the final keyword is for conceptual access limitation only. This makes a lot of sense to me. Is there something I'm not getting that makes you want an explicit virtual keyword? It's not dependable. Virtually everything meets those criteria and will be virtual, but I want to be confident that NOTHING is EVER virtual, unless I absolutely say so. D knows nothing about the class hierarchy when generating code, I don't know how it can make that claim? Anything that's not private can be extended by another module, and only the linker could ever know out about that. Aside from that, I want a compile error if someone tries to randomly override stuff. virtuals are a heinous crime, and should only be used explicitly. It should not be possible for someone to accidentally create a virtual.
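For what it's worth, D already lets you get most of the way there at the source level: a final: label makes every following method non-virtual, and an overriding method can spell out override to document the intent. A sketch (the Counter class is illustrative):

```d
import std.stdio;

class Base
{
    void tick() { }                 // virtual by default in D
}

class Counter : Base
{
    private int _n;

    override void tick() { ++_n; }  // override documents the intent

final:                              // everything below is non-virtual
    int  n()         { return _n; }
    void n(int v)    { _n = v; }
    void increment() { ++_n; }
}

void main()
{
    auto c = new Counter();
    c.increment();
    c.n = 41;          // property-style call of the final setter
    writeln(c.n + 1);  // 42
}
```

The policy Manu suggests amounts to starting every class body with final: and opting individual methods back into virtual dispatch by placing them above the label.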
Re: Proposal: user defined attributes
F i L wrote: struct Test { int value; this() { GC.setAttr(value, NO_SCAN); } } bleh, should be... struct Test { int value; this(int v) { GC.setAttr(&value, GC.BlkAttr.NO_SCAN); } } Or something like that. I've never actually set GC attributes before. But this also raises another issue: structs don't have default constructors, so applying NoScan attributes (by default) is, to my knowledge, impossible. Whereas it could (?) be possible through attributes.
Re: Array operation a1 + a2 not implemented!
On 03/18/2012 05:37 AM, Caligo wrote: void main() { float[4] a1 = [1, 2, 3, 4]; float[4] a2 = [3, 2, 8, 2]; auto r = a1 + a2; } When are they going to be implemented? I don't know, but this works: void main() { float[4] a1 = [1, 2, 3, 4]; float[4] a2 = [3, 2, 8, 2]; float[4] r = a1[] + a2[]; }
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 10:50:14 UTC, F i L wrote: x(@attribute(Serializable.yes) int a); Sure, but there's still the issue of using attributes for codegen. For instance compare: struct Test { @GC.NoScan int value; } to, the current: struct Test { int value; this() { GC.setAttr(value, NO_SCAN); } } How can we do that with mixin templates? If attributes were a language type the compiler could exploit in a consistent way, it would be *trivial* describing this behavior in a declarative way. Hmm... well if the x declarations store all NoScan objects in a collection, it could be injected into the constructor token stream later... x!(q{@GC.NoScan int value;}); // modified by x to insert a foreach with GC.setAttr x!(q{this() { /* foreach(...) GC.setAttr(...); */ }}); But I guess one loses the opportunity for some compile-time magic... in a more efficient way than exposed by the GC.setAttr API (just pure speculation, I don't have sufficient knowledge of the internal representation of our GC design).
Re: The definition of templates in D
On Sun, 18 Mar 2012 22:00:06 +1100, Derek ddparn...@bigpond.com wrote: The 'adding' is not the point; it could be any functionality. The point I was trying to get across was that it would be useful if the compiler could infer the type parameters of a template instantiation from the types of the data items used in the instantiation reference. The best I can come up with so far is ... import std.stdio; template add(X, Y, Z) { auto add(X c, Y a, Z b) { return cast(X)a + cast(X)b; } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; writeln(add(t,u,s)); // --> 457 writeln(add(s,t,s)); // --> 124.23 } It seems that the templated function's return type is not used when searching for matching templates, so I have to explicitly include something in the function's signature just to use it as the returned data's type. -- Derek Parnell Melbourne, Australia
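An alternative to the dummy parameter (a sketch, relying on partial explicit instantiation): pass only the return type explicitly and let the remaining type parameters be deduced from the arguments:

```d
import std.stdio;

// Z must be supplied at the call site; X and Y are deduced via IFTI.
Z add(Z, X, Y)(X a, Y b)
{
    return cast(Z)(cast(Z) a + cast(Z) b);
}

void main()
{
    double s = 1.23;
    int    t = 123;
    ulong  u = 456;

    t = add!int(u, s);   // Z = int given; X = ulong, Y = double inferred
    writeln(t);          // 457
}
```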
Re: virtual-by-default rant
On 18 March 2012 06:42, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 3/17/12 9:24 PM, Manu wrote: Yeah, I'm not really into that. I group things conceptually. Either way, I've never written a class where non-virtuals don't outweigh virtuals in the realm of 20:1. Then probably struct is what you're looking for. No, I definitely want a class. ref type, gc mem, etc. struct doesn't support virtual at all. I have 2 virtuals, this particular class has around 50 public methods, almost all of which are trivial accessors, called extremely heavily in hot loops. More similar classes to come. I've never in 15 years seen a large-ish class where the majority of methods are virtual. Who writes code like that? It's never come up in my industry at least. Maybe you'll occasionally see it in a small interface class, but D has real interfaces... On 18 March 2012 11:00, David Nadlinger s...@klickverbot.at wrote: Is this even possible without LTO/WPO? Extending a class defined in a library you link in (and for which codegen already happened) is certainly possible… It's not possible without LTO, which is crazy. Depending on an advanced optimiser to generate the most basic code is a clear mistake. I think we just need the ability to state 'final:' and mark explicit 'virtual's, the problem is mitigated without breaking the language. I can live with a policy where everyone is instructed to write code that way.
Re: Issue 7670
On Sunday, 18 March 2012 at 10:32:59 UTC, Dmitry Olshansky wrote: On 18.03.2012 6:50, bearophile wrote: […] [1, 2, -3, 4].filter!(x => x > 0).map!(x => x ^^ 0.2).writeln; I fail to see how property should affect this code in any way, no '=' anywhere in sight. UFCS is indeed nice. Calling parameterless functions without parens? David
Re: force inline/not-inline
On 18 March 2012 10:56, Adrian adrian.remove-nos...@veith-system.de wrote: +1 Currently I use string mixins to force inlining - but that's ugly Yeah, that's not an acceptable workaround. I couldn't write commercial/large-team code that way.
Re: The definition of templates in D
On 18.03.2012 15:34, Derek wrote: On Sun, 18 Mar 2012 22:00:06 +1100, Derek ddparn...@bigpond.com wrote: The 'adding' is not the point; it could be any functionality. The point I was trying to get across was that it would be useful if the compiler could infer the type parameters of a template instantiation from the types of the data items used in the instantiation reference. The best I can come up with so far is ... [snip] Why not this: template add(X) { auto add(Y,Z)(Y a, Z b) { return cast(X)a + cast(X)b; } } void main() { double s; int t; ulong u; s = 1.23; t = 123; u = 456; writeln(add!int(u,s)); // --> 467 writeln(add!double(t,s)); // --> 124.23 } In short: use type parameters when you parametrize on types. Use IFTI to avoid typing parameters that could be inferred. X, the type you cast to, clearly can't be inferred from the arguments. It seems that the templated function's return type is not used when searching for matching templates, so I have to explicitly include something in the function's signature just to use it as the returned data's type. No, just provide the type; dummy values smack of flawed dispatch techniques from C++ STL. -- Dmitry Olshansky
Re: Issue 7670
On 18.03.2012 15:36, David Nadlinger wrote: On Sunday, 18 March 2012 at 10:32:59 UTC, Dmitry Olshansky wrote: On 18.03.2012 6:50, bearophile wrote: […] [1, 2, -3, 4].filter!(x => x > 0).map!(x => x ^^ 0.2).writeln; I fail to see how property should affect this code in any way, no '=' anywhere in sight. UFCS is indeed nice. Calling parameterless functions without parens? David Ah, right. It was right there in plain sight :) -- Dmitry Olshansky
Re: Proposal: user defined attributes
On 18 March 2012 05:04, Kapps opantm2+s...@gmail.com wrote: On Sunday, 18 March 2012 at 01:48:07 UTC, Walter Bright wrote: On 3/17/2012 6:39 PM, Manu wrote: I'm sure C# already answers all these questions. It has precisely the same set of issues associated. C# doesn't have RAII, immutable, nor the notion of threadlocal/shared types. It has threadlocal using the [ThreadLocal] attribute which gets implemented by the compiler. In C#, all attributes live inside the TypeInfo/MethodInfo/etc for the class/struct/method/field/parameter. I don't see this being a problem. I can't think of good use cases where an attribute/annotation should be per-instance at all, particularly with the compile-time power that D has, whereas C# does not have any. Honestly, most uses of attributes in D would be done at compile-time, and I think that's acceptable until/if runtime reflection is put in. If it is needed, it can live inside TypeInfo/MethodInfo, but is (imo) absolutely not needed on a per-instance basis. This is the job of fields or interfaces. I think that's a fair call. I wonder why Java feels the need for stateful attributes. I have made extensive use of Java attributes that have been stateful. Java's persistence APIs rely on stateful attributes a lot, but I think that can be done in other ways in D. Compile-time attributes sound like a great start, it's much simpler. But they do need to be able to 'do stuff', ie, add some property/method to a field (using the field itself as the context pointer), not just simply 'tag' it.
Re: virtual-by-default rant
Manu wrote: D knows nothing about the class hierarchy when generating code, I don't know how it can make that claim? How does D not know about class hierarchy when generating code? That doesn't make sense to me. It *has* to know to even generate code. Anything that's not private can be extended by another module, and only the linker could ever know about that. This shouldn't be an issue: export void method() // virtual export final void method() // non-virtual Aside from that, I want a compile error if someone tries to randomly override stuff. This only really applies if the compiler can't optimize virtuals away. If the compiler was very good, then getting compiler errors would only make extending object structure a pain, IMO. I can see how one programmer might accidentally create a function with the same name as the base class's name, and how that would be annoying. That's why... virtuals are a heinous crime, and should only be used explicitly. It should not be possible for someone to accidentally create a virtual. ...I don't think having a virtual keyword would be a bad thing. Still, I think conceptually saying you _can't_ override this makes more sense than saying you _can_ override this when the biggest reason for using Classes is to build extendable object types. I think at the end of the day both arguments are highly arbitrary. virtual and final keywords could probably exist peacefully, and wouldn't dent the learning curve by much, so I don't have any strong argument against virtual. It's just not the one I'd choose.
Re: virtual-by-default rant
On 2012-03-18 04:27, bearophile wrote: F i L: I'm a bit confused. Reading through the virtual function's docs (http://dlang.org/function.html#virtual-functions) it says: All non-static non-private non-template member functions are virtual. This may sound inefficient, but since the D compiler knows all of the class hierarchy when generating code, all functions that are not overridden can be optimized to be non-virtual. This is so much theoretical that I think this should be removed from the D docs. And to be put back when one DMD compiler is able to do this. Otherwise it's just false advertising :-) I agree that can be misleading. But I think the D docs should be about the D language and not DMD implementation of D. -- /Jacob Carlborg
Re: Proposal: user defined attributes
Tove wrote: Hmm... well if the x declarations store all NoScan objects in a collection, it could be injected into the constructor token stream later... x!(@GC.NoScan int value;); // modified by x to insert a foreach with GC.setAttr x!(q{this() {/* foreach(...) GC.setAttr(...); */ }); But if x!() pumps out a constructor, how do you add multiple attributes with @GC.NoScan? The constructors would collide.
Re: virtual-by-default rant
On 18 March 2012 13:59, F i L witte2...@gmail.com wrote: Manu wrote: D knows nothing about the class hierarchy when generating code, I don't know how it can make that claim? How does D not know about class hierarchy when generating code? That doesn't make sense to me. It *has* to know to even generate code. I mean it can't possibly know the complete 'final' class hierarchy, ie, the big picture. Anything anywhere could extend it. The codegen must assume such. Aside from that, I want a compile error if someone tries to randomly override stuff. This only really applies if the compiler can't optimize virtuals away. If the compiler was very good, then getting compiler errors would only make extending object structure a pain, IMO. I can see how one programmer might accidentally create a function with the same name as the base classes name, and how that would be annoying. That's why... Are you saying someone might accidentally override something that's not virtual? That's what 'override' is for. If a method is final, it is a compile error to override in any way; you either need to make the base virtual, or explicitly 'override' on the spot if you want to do that. virtuals are a heinous crime, and should only be used explicitly. It should not be possible for someone to accidentally create a virtual. ...I don't think having a virtual keyword would be a bad thing. Still, I think conceptually saying you _can't_ override this makes more sense than saying you _can_ override this when the biggest reason for using Classes is to build extendable object types. I see it precisely the other way around. You still need strict control over precisely HOW to extend that thing. The virtual methods are the exception, not the common case. Explicit virtual even gives a nice informative cue to the programmer just how they are supposed to work with/extend something. You can clearly see what can/should be extended.
Add to that the requirement for an advanced optimiser to clean up the mess with LTO, and the fact that a programmer can never have confidence in the final state of the function, and I want it the other way around. I sincerely fear finding myself false-virtual hunting on build night until 2am trying to get the game to hold its frame rate (I already do this in C++, but at least you can grep for and validate them!). Or cutting content because we didn't take the time required to manually scan for false virtuals that could have given us more frame time. I think at the end of the day both arguments are highly arbitrary. virtual and final keywords could probably exist peacefully, and wouldn't dent the learning curve by much, so I don't have any strong argument against virtual. It's just not the one I'd choose. You're welcome to it, but granted that, I have an additional fear that someone with your opinion is capable of writing classes in libs that I might really like to use, but can't, because they are a severe performance hazard. It will be a shame if there is eventually a wealth of D libraries, and only some of them are usable in realtime code because the majority of programmers are blind to this problem. (Again, this is also common in C++. I've encountered many libraries over the years that we had to reject, or even more costly, remove later on after integrating and realising they were unusable)
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 12:10:23 UTC, F i L wrote: Tove wrote: Hmm... well if the x declarations store all NoScan objects in a collection, it could be injected into the constructor token stream later... x!(@GC.NoScan int value;); // modified by x to insert a foreach with GC.setAttr x!(q{this() {/* foreach(...) GC.setAttr(...); */ }); But if x!() pumps out a constructor, how do you add multiple attributes with @GC.NoScan? The constructors would collide. 1. x! would parse all decls at compile time... 2. all attributes that need to modify the constructor are inserted at the points where the x!-enabled constructors are declared/implemented... x!(@GC.NoScan @GC.Hot @attribute(Serializable.yes) int value;); x!(q{this() { /* everything from 'x' is auto inserted here */ my; normal; constructor; tokens; });
Re: Proposal: user defined attributes
Tove wrote: 1. x! would parse all decls at compile time... 2. all attributes that need to modify the constructor is inserted at the points where the x! enabled constructors are declared/implemented... x!(@GC.NoScan @GC.Hot @attribute(Serializable.yes) int value;); x!(q{this() { /* everything from 'x' is auto inserted here */ my; normal; constructor; tokens; }); I see, that would work. But why not just build this same operation into the compiler so the definition syntax is the same as usual? The mixins are powerful but a bit ugly. Not to mention no IDE parser on the planet is going to be able to figure out all that to give you intelligent code-completion.
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 12:39:19 UTC, F i L wrote: Tove wrote: 1. x! would parse all decls at compile time... 2. all attributes that need to modify the constructor is inserted at the points where the x! enabled constructors are declared/implemented... x!(@GC.NoScan @GC.Hot @attribute(Serializable.yes) int value;); x!(q{this() { /* everything from 'x' is auto inserted here */ my; normal; constructor; tokens; }); I see, that would work. But why not just build this same operation into the compiler so the definition syntax is the same as usual. The mixin's are powerful but a bit ugly. Not to mention not IDE parser on the planet is going to be able to figure out all that to give you intelligent code-completion. Yes, I was thinking along these lines... what would be the absolute bare minimum of compiler support needed to make this scheme look first class? What if, when the compiler encounters an unknown @token, it delegates the parsing to a library implementation... which basically would do the above, but it would be hidden from the user. This way we would get 0 effort (from the perspective of the compiler) and extensible syntax in the library, covering all future needs.
Re: virtual-by-default rant
Le 18/03/2012 02:23, Manu a écrit : The virtual model is broken. I've complained about it lots, and people always say stfu, use 'final:' at the top of your class. That sounds tolerable in theory, except there's no 'virtual' keyword to keep the virtual-ness of those 1-2 virtual functions I have... so it's no good (unless I rearrange my class, breaking the logical grouping of stuff in it). So I try that, and when I do, it complains: Error: variable demu.memmap.MemMap.machine final cannot be applied to variable, allegedly a D1 remnant. So what do I do? Another workaround? Tag everything as final individually? My minimum recommendation: D needs an explicit 'virtual' keyword, and to fix that D1 bug, so putting final: at the top of your class works, and everything from there works as it should. The problem isn't virtual-by-default itself; changing the default would just flip the problem around. It just shows the need for a keyword to express the opposite of final: virtual. The same problem occurs with const/immutable: you cannot go back to the mutable world when you use « const: », for example.
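To make the asymmetry concrete, here is a small D sketch (class and method names hypothetical, not Manu's actual code) of the only arrangement that works today: since nothing can re-enable virtual after 'final:', the virtuals must be grouped before it, breaking whatever logical grouping you had, and the fields must sit before it too, to dodge the D1-remnant error on variables.

```d
// Hypothetical names; illustrates the workaround, not a recommendation.
class MemMap
{
    private uint m_addr, m_size;    // fields first: 'final:' applied to a
                                    // variable triggers the D1-remnant error

    // virtual by default -- must come before the final: section,
    // because there is no 'virtual' keyword to opt back in afterwards:
    void onRead() {}
    void onWrite() {}

final:                              // everything below is non-virtual
    uint addr() { return m_addr; }
    uint size() { return m_size; }
}
```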
Re: virtual-by-default rant
Le 18/03/2012 03:47, F i L a écrit : I'm a bit confused. Reading through the virtual function's docs (http://dlang.org/function.html#virtual-functions) it says: All non-static non-private non-template member functions are virtual. This may sound inefficient, but since the D compiler knows all of the class hierarchy when generating code, all functions that are not overridden can be optimized to be non-virtual. The compiler can. But ATM, it doesn't. This is an implementation issue, not a language design issue.
null allowing @safe code to do unsafe stuff.
Given a class that would create a very large object if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As a solution, @safe code should insert tests for null references, or should prevent null from being used.
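A minimal sketch of the hole being described (class name and sizes hypothetical): a field access through a null reference is just base address plus field offset, so with a large enough object the offset escapes the guard page at address zero and nothing in @safe code stops the write.

```d
// Sketch only -- do not run. Field offsets on a null reference become
// raw addresses; a large enough object escapes any guard page.
class Huge
{
    ubyte[1024 * 1024 * 1024] pad;  // ~1 GiB of fields before 'last'
    ubyte last;
}

void poke(Huge h) @safe
{
    h.last = 42;    // if h is null, this writes near address 2^30 --
                    // far past the protected page, so no segfault is
                    // guaranteed, yet the compiler accepts it as @safe
}

void main() @safe
{
    Huge h = null;  // null references are perfectly legal in @safe code
    // poke(h);     // undefined memory write, no null check emitted
}
```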
Re: virtual-by-default rant
Manu wrote: I mean it can't possibly know the complete 'final' class hierarchy, ie, the big picture. Anything anywhere could extend it. The codegen must assume such. I still don't understand why you think this. The compiler must understand the full hierarchy it's compiling, and like I said before, there are very distinct rules as to what should get virtualed across a lib boundary. Are you saying someone might accidentally override something that's not virtual? That's what 'override' is for. If a method is final, it is a compile error to override in any way, you either need to make the base virtual, or explicitly 'override' on the spot if you want to do that. I was saying that I see how: class Base { // author: Bob void commonNamedMethod() {} } // ~ class Foo : Base { // author: Steve // didn't look at the base class before writing: void commonNamedMethod() {} // therefore didn't realize he overwrote it } is a valid concern of having things default to virtual. I just don't think it would happen often, but I've been known to be wrong. The virtual methods are the exception, not the common case. I don't think it's so black and white, and that's why I like having the compiler make the optimization. I think marking (public) methods whose functionality you know for sure you don't want overwritten is often a smaller case than methods whose functionality could *potentially* be overwritten. By letting the compiler optimize un-overwritten methods, you're giving the Class users more freedom over the grey areas without sacrificing performance. However, I also think the level of freedom is largely dependent on the situation. Given the fact that you write highly optimized and tightly controlled core game engine code, I can see why your perspective leans towards control. Given this specialization imbalance, I think that both virtual and final should be available.
Explicit virtual even gives a nice informative cue to the programmer just how they are supposed to work with/extend something. You can clearly see what can/should be extended. This is a good argument. If nothing else, I think there should be a way for Class authors to specify (in a way code-completion can understand) a method attribute which marks it as being designed to be overwritten. I sincerely fear finding myself false-virtual hunting on build night until 2am trying to get the game to hold its frame rate (I already do this in C++, but at least you can grep for and validate them!). Or cutting content because we didn't take the time required to manually scan for false virtuals that could have given us more frame time. I think a tool that maps the hierarchy (showing overrides) would be best, like: dmd -hierarchymap.txt You're welcome to it, but granted that, I have an additional fear that someone with your opinion is capable of writing classes in libs that I might really like to use, but can't, because they are a severe performance hazard. I would argue that any such performance critical libraries should be tightly finalized in the first place. I think you're assuming the compiler can't, in good faith, optimize out virtual functions. Whereas I'm assuming it can.
page size in druntime is a mess
Page size in druntime is sometimes a constant (4Kb), sometimes calculated, and often assumed to be a compile-time constant. druntime should define a proper, authoritative place to calculate the page size, and then use the value calculated there everywhere else. I did some tests today, and it requires quite a lot of changes. But it is mandatory to run D on any system where the page size isn't 4Kb. So, first question: where should this be calculated? In core.memory?
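Whatever module ends up owning it, the calculation itself is small. A POSIX-only sketch (Windows would need GetSystemInfo instead), assuming druntime's core.sys.posix.unistd bindings expose sysconf and _SC_PAGESIZE as they do on common platforms:

```d
// Sketch: compute the page size once at startup instead of assuming 4Kb.
// POSIX-only; the Windows branch is omitted here.
import core.sys.posix.unistd : sysconf, _SC_PAGESIZE;

immutable size_t pageSize;      // authoritative value, set before main()

shared static this()
{
    // sysconf returns a long; -1 would mean the query is unsupported
    pageSize = cast(size_t) sysconf(_SC_PAGESIZE);
}
```

The point of the module constructor is that every other piece of druntime can then read one immutable value rather than re-querying or hard-coding 4096.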
Re: null allowing @safe code to do unsafe stuff.
On 03/18/2012 02:54 PM, deadalnix wrote: Given a class, that would create a very large object This is the culprit. if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As solution, @safe code should insert tests for null reference, or should prevent null to be used. This is fighting symptoms.
Re: virtual-by-default rant
On Sunday, 18 March 2012 at 13:54:20 UTC, F i L wrote: […] I think you're assuming the compiler can't, in good faith, optimize out virtual functions. Whereas I'm assuming it can. Which is wrong as long as you don't do link-time optimization, and DMD probably won't in the foreseeable future. I tried to explain that above, think extending Thread, which has already been compiled into druntime, from your application (which is a bad example, because thread member method calls are most probably not performance sensitive, but you get the point). That's just for the technical details, though, as far as the actual language design is concerned, I don't think virtual by default is an unreasonable choice. David
Re: virtual-by-default rant
On 18.03.2012 5:23, Manu wrote: The virtual model is broken. I've complained about it lots, and people always say stfu, use 'final:' at the top of your class. That sounds tolerable in theory, except there's no 'virtual' keyword to keep the virtual-ness of those 1-2 virtual functions I have... so it's no good (unless I rearrange my class, breaking the logical grouping of stuff in it). So I try that, and when I do, it complains: Error: variable demu.memmap.MemMap.machine final cannot be applied to variable, allegedly a D1 remnant. So what do I do? Another workaround? Tag everything as final individually? My minimum recommendation: D needs an explicit 'virtual' keyword, and to fix that D1 bug, so putting final: at the top of your class works, and everything from there works as it should. Following this thread and observing that you don't trust the optimizer and compiler in many cases or have to double check them anyway, I have a suggestion: do virtual dispatch by hand via a func-pointer table and use structs. I'm serious; with a bit of metaprogramming it wouldn't be half bad, and as a bonus you don't have to pay for a monitor field per object as classes do, and in general there's less compiler magic to keep track of. You also gain the ability to fine-tune their layout; the performance maniac side of you must see the potential it brings :) And since you have only a few virtuals anyway and keep them in constant check, it should be doable and much less of a hassle than hunting down and second-guessing the compiler at every single step. Bottom line though: there is a point where a given feature doesn't bring significant convenience for a specific use case; it's then better to just stop pushing it. -- Dmitry Olshansky
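For the record, a rough sketch of what such hand-rolled dispatch could look like (all names hypothetical): each struct carries one pointer to a shared function-pointer table, with neither the monitor field nor the implicit vtable layout of a class.

```d
// Illustrative only: manual 'virtual' dispatch via a function-pointer table.
struct WidgetVTable
{
    void function(ref Widget) draw;     // one slot per "virtual" method
}

struct Widget
{
    immutable(WidgetVTable)* vtbl;      // one pointer per object, no monitor
    int x, y;

    void draw() { vtbl.draw(this); }    // the manual dispatch
}

void drawButton(ref Widget w) { /* button-specific drawing */ }

static immutable WidgetVTable buttonVTable = WidgetVTable(&drawButton);

void main()
{
    auto b = Widget(&buttonVTable, 10, 20);
    b.draw();                           // dispatches through the table
}
```

Because the programmer controls the table, every call site is explicit and greppable, which is exactly the property Manu says he loses with implicit virtuals.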
Re: virtual-by-default rant
On Sunday, 18 March 2012 at 14:27:04 UTC, David Nadlinger wrote: Which is wrong as long as you don't do link-time optimization, and DMD probably won't in the foreseeable future. I tried to explain that above, think extending Thread, which has already been compiled into druntime, from your application (which is a bad example, because thread member method calls are most probably not performance sensitive, but you get the point). Also note that this applies to the general case where you get passed in an arbitrary instance only – if the place where an object is created is in the same translation unit where its methods are invoked, the compiler _might_ be able to prove the runtime type of the instance even without LTO. David
Re: virtual-by-default rant
On Sunday, 18 March 2012 at 14:46:55 UTC, David Nadlinger wrote: On Sunday, 18 March 2012 at 14:27:04 UTC, David Nadlinger wrote: Which is wrong as long as you don't do link-time optimization, and DMD probably won't in the foreseeable future. I tried to explain that above, think extending Thread, which has already been compiled into druntime, from your application (which is a bad example, because thread member method calls are most probably not performance sensitive, but you get the point). Also note that this applies to the general case where you get passed in an arbitrary instance only – if the place where an object is created is in the same translation unit where its methods are invoked, the compiler _might_ be able to prove the runtime type of the instance even without LTO. And thinking even more about it, devirtualization could also be performed by present-day DMD when directly generating an executable with all the modules being passed in via the command line. This might actually be good enough for smaller projects which don't use separate libraries or incremental compilation. David
Re: null allowing @safe code to do unsafe stuff.
Le 18/03/2012 15:24, Timon Gehr a écrit : On 03/18/2012 02:54 PM, deadalnix wrote: Given a class, that would create a very large object This is the culprit. if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As solution, @safe code should insert tests for null reference, or should prevent null to be used. This is fighting symptoms. @safe is supposed to be a guarantee. And, even if it is bad practice, in this case we aren't able to ensure that these guarantees are respected. Given that, @safe doesn't guarantee anything. You may think that this isn't a problem, but what is the point of @safe if it is unable to ensure anything?
Re: page size in druntime is a mess
On Mar 18, 2012, at 7:12 AM, deadalnix deadal...@gmail.com wrote: Page size in druntime is sometime a constant (4Kb), sometime calculated, often assumed to be a compile time constant. druntime should define a proper, authoritative, place to calculate that page size, and then use the page calculated here. I did some tests today, and it require quite a lot of changes. But it is mandatory to run D on any system where the page size isn't 4Kb. So first question, where this should be calculated ? In core.memory ? rt.memory possibly. That or core.memory.
Re: null allowing @safe code to do unsafe stuff.
On 03/18/2012 04:15 PM, deadalnix wrote: Le 18/03/2012 15:24, Timon Gehr a écrit : On 03/18/2012 02:54 PM, deadalnix wrote: Given a class, that would create a very large object This is the culprit. if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As solution, @safe code should insert tests for null reference, or should prevent null to be used. This is fighting symptoms. @safe is supposed to be a guarantee. And, even if it is bad practice, in this case we aren't able to ensure that these guarantee are respected. Given that, @safe doesn't guarantee anything. You may think that this isn't a problem, but, what is the point of @safe if it is unable to ensure anything ? No null checks are necessary as long as there is no class that would create such a very large object.
Re: virtual-by-default rant
On 3/18/12 6:37 AM, Manu wrote: On 18 March 2012 06:42, Andrei Alexandrescu seewebsiteforem...@erdani.org mailto:seewebsiteforem...@erdani.org wrote: Then probably struct is what you're looking for. No, I definitely want a class. ref type, gc mem, etc. struct doesn't support virtual at all. I have 2 virtuals, this particular class has around 50 public methods, almost all of which are trivial accessors, called extremely heavily in hot loops. More similar classes to come. Then perhaps it's a good idea to move accessors outside and take advantage of UFCS. I've never in 15 years seen a large-ish class where the majority of methods are virtual. Who writes code like that? It's never come up in my industry at least. I consider thick interfaces and shallow hierarchies good design. An interface that's too small invites inherit to extend approaches and casts. The fact that Java made extend a keyword that really means narrow is quite ironic. Andrei
Re: virtual-by-default rant
On 3/18/12 6:59 AM, F i L wrote: Manu wrote: D knows nothing about the class hierarchy when generating code, I don't know how it can make that claim? How does D not know about class hierarchy when generating code? That doesn't make sense to me. It *has* to know to even generate code. It knows about ancestors of each type but not about descendants. Andrei
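Concretely (module names hypothetical): while compiling one module the compiler can see everything a class inherits from, but a subclass may live in a module that doesn't exist yet, so the base method has to be compiled as a virtual call.

```d
// a.d -- when this module is compiled on its own, b.d may not even
// exist yet, so the compiler cannot prove f() is never overridden
// and must emit a vtable call for it:
module a;
class Base { int f() { return 1; } }

// b.d -- compiled later, possibly into a separate library that merely
// links against a.d's already-generated code:
module b;
import a;
class Derived : Base { override int f() { return 2; } }
```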
Re: virtual-by-default rant
On 3/18/12 8:39 AM, deadalnix wrote: It just show the need of keyword to express the opposite of final, virtual. The same problem occur with const immutable, you cannot go back to the mutable world when you use « const: » for example. Yah, ~const etc. have been suggested a couple of times. Helps casts too. Andrei
Re: null allowing @safe code to do unsafe stuff.
On 3/18/12 10:15 AM, deadalnix wrote: Le 18/03/2012 15:24, Timon Gehr a écrit : On 03/18/2012 02:54 PM, deadalnix wrote: Given a class, that would create a very large object This is the culprit. if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As solution, @safe code should insert tests for null reference, or should prevent null to be used. This is fighting symptoms. @safe is supposed to be a guarantee. And, even if it is bad practice, in this case we aren't able to ensure that these guarantee are respected. Given that, @safe doesn't guarantee anything. You may think that this isn't a problem, but, what is the point of @safe if it is unable to ensure anything ? Safe guarantees your program doesn't have soft memory errors. It can still have hard memory errors. Andrei
Re: null allowing @safe code to do unsafe stuff.
On 3/18/12 10:19 AM, Timon Gehr wrote: On 03/18/2012 04:15 PM, deadalnix wrote: Le 18/03/2012 15:24, Timon Gehr a écrit : On 03/18/2012 02:54 PM, deadalnix wrote: Given a class, that would create a very large object This is the culprit. if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As solution, @safe code should insert tests for null reference, or should prevent null to be used. This is fighting symptoms. @safe is supposed to be a guarantee. And, even if it is bad practice, in this case we aren't able to ensure that these guarantee are respected. Given that, @safe doesn't guarantee anything. You may think that this isn't a problem, but, what is the point of @safe if it is unable to ensure anything ? No null checks are necessary as long as there is no class that would create such a very large object. Yah, we need to insert a rule that prevents creating class objects larger than 64KB. Java has the same. Andrei
Re: virtual-by-default rant
Le 18/03/2012 16:26, Andrei Alexandrescu a écrit : On 3/18/12 8:39 AM, deadalnix wrote: It just show the need of keyword to express the opposite of final, virtual. The same problem occur with const immutable, you cannot go back to the mutable world when you use « const: » for example. Yah, ~const etc. have been suggested a couple of times. Helps casts too. Andrei This seems definitively an issue to me. ~const/~final, or mutable/virtual, would be a huge benefit to the « : » syntax. And you are right, it is also a big plus for casting. I would argue for the last one, especially for the const case, because mutable is opposed to both const and immutable, so it isn't the opposite of const alone. The case is very similar to public/private/protected, where each has its own keyword, and it doesn't seem harmful. Note that the same issue exists for shared (but that one doesn't work anyway).
Re: null allowing @safe code to do unsafe stuff.
Le 18/03/2012 16:30, Andrei Alexandrescu a écrit : On 3/18/12 10:19 AM, Timon Gehr wrote: On 03/18/2012 04:15 PM, deadalnix wrote: Le 18/03/2012 15:24, Timon Gehr a écrit : On 03/18/2012 02:54 PM, deadalnix wrote: Given a class, that would create a very large object This is the culprit. if instantiated, and a null reference, you can access memory in « raw mode ». This is @safe D code, but really isn't. As solution, @safe code should insert tests for null reference, or should prevent null to be used. This is fighting symptoms. @safe is supposed to be a guarantee. And, even if it is bad practice, in this case we aren't able to ensure that these guarantee are respected. Given that, @safe doesn't guarantee anything. You may think that this isn't a problem, but, what is the point of @safe if it is unable to ensure anything ? No null checks are necessary as long as there is no class that would create such a very large object. Yah, we need to insert a rule that prevents creating class objects larger than 64KB. Java has the same. Andrei This is another solution. In this case, we have to ensure that the first 64kb of the address space are page-protected, to detect null pointer dereferences in druntime.
Re: virtual-by-default rant
This is so much theoretical that I think this should be removed from the D docs. And to be put back when one DMD compiler is able to do this. Otherwise it's just false advertising :-) Is this even possible without LTO/WPO? Extending a class defined in a library you link in (and for which codegen already happened) is certainly possible… David This is not even possible with LTO because new classes could be loaded at runtime. Could somebody please fix this.
Re: virtual-by-default rant
On 18/03/2012 17:02, Martin Nowak wrote: This is so much theoretical that I think this should be removed from the D docs. And to be put back when one DMD compiler is able to do this. Otherwise it's just false advertising :-) Is this even possible without LTO/WPO? Extending a class defined in a library you link in (and for which codegen already happened) is certainly possible… David This is not even possible with LTO because new classes could be loaded at runtime. Could somebody please fix this. That is limited to exported classes. In that case, final/virtual should be managed very precisely anyway if performance matters.
Re: The definition of templates in D
On 03/18/12 12:05, Derek wrote: On Sun, 18 Mar 2012 21:36:46 +1100, FeepingCreature default_357-l...@yahoo.de wrote: why would you do that To make coding easier to write AND read. what do you want to _do_ Infer template arguments from the data types presented in the data values supplied on the instantiation statement. it sounds like you're frantically trying to nail templates into a shape that they really really really aren't meant for I assumed D templates were a type of template; a model for real runnable code that the compiler can instantiate based on the arguments supplied. Yes but you keep trying to pass runtime arguments to a compiletime construct. I think you understand what templates are but not how to use them. in any case what is wrong with auto add(T)(T t) { return t[0] + t[1]; } It doesn't work. Okay, let's try this. template add(T) { template add(U...) { T add(U u) { T res; foreach (entry; u) res += cast(T) entry; return res; } } } add!int(2, 3, 4.0, 5.0f);
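For reference, a sketch along the lines of that last snippet, restated as an eponymous function template so it actually compiles and runs: the result type T is explicit, while the argument list U... is inferred from the call (the names are just the ones used in the thread).

```d
import std.stdio;

// T (the accumulator type) is chosen by the caller; U... is inferred,
// so runtime values of mixed numeric types can be summed into one T.
template add(T)
{
    T add(U...)(U u)
    {
        T res = 0;
        foreach (entry; u)
            res += cast(T) entry; // runtime values, compile-time types
        return res;
    }
}

void main()
{
    writeln(add!int(2, 3, 4.0, 5.0f)); // 2 + 3 + 4 + 5 == 14
}
```

The point being made to Derek: the types are fixed at instantiation, but the values flowing through are perfectly ordinary runtime arguments.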
Re: null allowing @safe code to do unsafe stuff.
On 2012-03-18 15:53:42 +, deadalnix deadal...@gmail.com said: On 18/03/2012 16:30, Andrei Alexandrescu wrote: On 3/18/12 10:19 AM, Timon Gehr wrote: No null checks are necessary as long as there is no class that would create such a very large object. Yah, we need to insert a rule that prevents creating class objects larger than 64KB. Java has the same. Andrei This is another solution. In this case, we have to ensure that the first 64KB of the address space are page protected to detect null pointer dereferences in druntime. On Mac OS X, the protected area is much smaller. 4 KB I think on Snow Leopard 32-bit. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: null allowing @safe code to do unsafe stuff.
On 2012-03-18 15:30:44 +, Andrei Alexandrescu seewebsiteforem...@erdani.org said: Yah, we need to insert a rule that prevents creating class objects larger than 64KB. Java has the same. The bug you created for that: http://d.puremagic.com/issues/show_bug.cgi?id=5176 -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Why not finally allow bracket-less top-level keywords?
On 03/18/12 02:23, Manu wrote: The virtual model is broken. I've complained about it lots, and people always say stfu, use 'final:' at the top of your class. That sounds tolerable in theory, except there's no 'virtual' keyword to keep the virtual-ness of those 1-2 virtual functions I have... so it's no good (unless I rearrange my class, breaking the logical grouping of stuff in it). So I try that, and when I do, it complains: Error: variable demu.memmap.MemMap.machine final cannot be applied to variable, allegedly a D1 remnant. So what do I do? Another workaround? Tag everything as final individually? My minimum recommendation: D needs an explicit 'virtual' keyword, and to fix that D1 bug, so putting final: at the top of your class works, and everything from there works as it should. See subject. Example: class Foo : Bar final { } as alternative syntax for class Foo : Bar { final { } } Advantages: internally consistent, no need for completely new syntax, final class can be deprecated (it never worked well anyway). Alternate aspects of this syntax change: void foo(ObjectThing ot, int a, int b) with (ot) { } void bar() synchronized { }
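To illustrate the complaint being quoted: with no 'virtual' keyword to opt back in after a final: label, the workaround forces the class to be reordered. A rough sketch (MemMap is the class name from Manu's error message; the method bodies are invented):

```d
class MemMap
{
    // The one or two methods that must stay virtual have to be hoisted
    // above the label, breaking whatever logical grouping the class had...
    void update() { }

final: // ...because nothing below this point can be made virtual again.
    int read(int addr) { return 0; }
    void write(int addr, int value) { }
}

void main()
{
    auto m = new MemMap;
    m.update();
    assert(m.read(0) == 0);
}
```

FeepingCreature's proposal would instead hang the attribute off the class header itself.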
Re: Why not finally allow bracket-less top-level keywords?
On 03/18/2012 05:25 PM, FeepingCreature wrote: Advantages: internally consistent, no need for completely new syntax, final class can be deprecated (it never worked well anyway). final class means that the class cannot be inherited from.
Re: virtual-by-default rant
On Sunday, 18 March 2012 at 16:02:04 UTC, Martin Nowak wrote: Is this even possible without LTO/WPO? Extending a class defined in a library you link in (and for which codegen already happened) is certainly possible… David This is not even possible with LTO because new classes could be loaded at runtime. Sure, you can't just devirtualize everything you come across even with LTO, but it greatly increases the portion of calls where you can deduce the actual type of an instance. David
Re: null allowing @safe code to do unsafe stuff.
On 18/03/2012 17:18, Michel Fortin wrote: On 2012-03-18 15:53:42 +, deadalnix deadal...@gmail.com said: On 18/03/2012 16:30, Andrei Alexandrescu wrote: On 3/18/12 10:19 AM, Timon Gehr wrote: No null checks are necessary as long as there is no class that would create such a very large object. Yah, we need to insert a rule that prevents creating class objects larger than 64KB. Java has the same. Andrei This is another solution. In this case, we have to ensure that the first 64KB of the address space are page protected to detect null pointer dereferences in druntime. On Mac OS X, the protected area is much smaller. 4 KB I think on Snow Leopard 32-bit. We can page protect the first 64KB in druntime.
Re: Why not finally allow bracket-less top-level keywords?
On 18/03/2012 17:49, Timon Gehr wrote: On 03/18/2012 05:25 PM, FeepingCreature wrote: Advantages: internally consistent, no need for completely new syntax, final class can be deprecated (it never worked well anyway). final class means that the class cannot be inherited from. What is the point of inheriting if no virtual method exists?
Re: null allowing @safe code to do unsafe stuff.
On 3/18/12 11:18 AM, Michel Fortin wrote: On 2012-03-18 15:53:42 +, deadalnix deadal...@gmail.com said: On 18/03/2012 16:30, Andrei Alexandrescu wrote: On 3/18/12 10:19 AM, Timon Gehr wrote: No null checks are necessary as long as there is no class that would create such a very large object. Yah, we need to insert a rule that prevents creating class objects larger than 64KB. Java has the same. Andrei This is another solution. In this case, we have to ensure that the first 64KB of the address space are page protected to detect null pointer dereferences in druntime. On Mac OS X, the protected area is much smaller. 4 KB I think on Snow Leopard 32-bit. I realized I was mistaken. Just looked at http://docs.oracle.com/javase/specs/jvms/se5.0/html/ClassFile.doc.html#88659 and it seems the number of fields added by a class is limited to 64K, but that's fields, not bytes, and does not count the fields of the base class. In order to be safe, a D implementation must figure out what the protected area size is and insert null checks for all field accesses that go beyond it. Andrei
Re: Why not finally allow bracket-less top-level keywords?
On Sun, 18 Mar 2012 18:07:10 +0100, deadalnix deadal...@gmail.com wrote: On 18/03/2012 17:49, Timon Gehr wrote: On 03/18/2012 05:25 PM, FeepingCreature wrote: Advantages: internally consistent, no need for completely new syntax, final class can be deprecated (it never worked well anyway). final class means that the class cannot be inherited from. What is the point of inheriting if no virtual method exists? Access to protected members.
Re: virtual-by-default rant
David Nadlinger wrote: Which is wrong as long as you don't do link-time optimization, and DMD probably won't in the foreseeable future. Are GDC and LDC limited by DMD in this regard? I know LDC has a LTO flag. If GDC/LDC supports LTO and/or DMD will in the eventual future, then I think defaulting to final is best. If you're saying that even with LTO you wouldn't be able to do automatic de-virtualization ever, then I think Manu might be right in saying the model is backwards. I don't know enough about LTO to comment either way though. FeepingCreature wrote: class Foo : Bar final { } as alternative syntax for class Foo : Bar { final { } } Advantages: internally consistent, no need for completely new syntax, final class can be deprecated (it never worked well anyway). Alternate aspects of this syntax change: void foo(ObjectThing ot, int a, int b) with (ot) { } void bar() synchronized { } +1 This syntax makes a lot of sense.
Re: virtual-by-default rant
On Sunday, 18 March 2012 at 17:24:15 UTC, F i L wrote: […] I know LDC has a LTO flag. Unfortunately it doesn't (-O4/-O5 are defunct), but working on seamless LTO integration (and better optimization pass scheduling in general) would be low-hanging fruit for anybody wanting to join LDC development. David
Re: OpenBSD port of dmd?
On Saturday, 17 March 2012 at 01:37:49 UTC, Andrei Alexandrescu wrote: This seems to accomplish little more than well I didn't use else. Again: what exactly is wrong with specialization? The advantage is, that when you write the code, you have _no idea_ what platform/os it might need to run on in the future. You _cannot_ know which version is most appropriate for _all_ new platforms, or even if any of them will work at all. Oh yes I do. Often I know every platform has e.g. getchar() so I can use it. That will definitely work for everyone. Then, if I get to optimize things for particular platforms, great. This is the case for many artifacts. version(PlatformX) { return superfastgetchar(); } else { return getchar(); } Later on, we add support for PlatformY, which also supports superfastgetchar(). If you write code as above then no one will notice. It will just happily use getchar() and suffer because of it. If you static assert on new platforms then it forces you to look at the code for each new platform added and think about what version would be best. As Walter said, you can't know what version will be most appropriate for all new platforms.
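A sketch of the static-assert style Peter is advocating, as opposed to a silent else fallback (PlatformX and superfastgetchar are the thread's hypothetical names; the Posix/Windows branches just use the C getchar everyone has):

```d
int readOne()
{
    version (PlatformX)
    {
        return superfastgetchar(); // hypothetical optimized primitive
    }
    else version (Posix)
    {
        import core.stdc.stdio : getchar;
        return getchar();
    }
    else version (Windows)
    {
        import core.stdc.stdio : getchar;
        return getchar();
    }
    else
    {
        // New platform? Fail loudly instead of silently taking the
        // generic path, so the porter must pick the best version.
        static assert(0, "port readOne() to this platform");
    }
}
```

With this layout, adding PlatformY forces a deliberate decision at every such site, which is exactly the property Peter wants and Andrei considers unnecessary friction.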
Re: OpenBSD port of dmd?
On Saturday, 17 March 2012 at 00:16:39 UTC, Andrei Alexandrescu wrote: Not convinced. They call it specialization, and it's a powerful concept. We use it in std.algorithm all over the place. I think having good-enough defaults works well for std.algorithm, simply because requiring explicit versions of everything would be tedious and error-prone. For platform-dependent code, I think requiring explicit versions is less tedious due to the relative infrequency of having to add a new platform. It's purely a matter of magnitude. If std.algorithm only had to support, say, five types, with a new type added maybe once a year, I think requiring explicit versions of every parameterized function would be a good idea, too. It forces you to think about things.
Re: OpenBSD port of dmd?
On Sunday, 18 March 2012 at 18:09:42 UTC, Peter Alexander wrote: On Saturday, 17 March 2012 at 01:37:49 UTC, Andrei Alexandrescu wrote: This seems to accomplish little more than well I didn't use else. Again: what exactly is wrong with specialization? The advantage is, that when you write the code, you have _no idea_ what platform/os it might need to run on in the future. You _cannot_ know which version is most appropriate for _all_ new platforms, or even if any of them will work at all. Oh yes I do. Often I know every platform has e.g. getchar() so I can use it. That will definitely work for everyone. Then, if I get to optimize things for particular platforms, great. This is the case for many artifacts. version(PlatformX) { return superfastgetchar(); } else { return getchar(); } Later on, we add support for PlatformY, which also supports superfastgetchar(). If you write code as above then no one will notice. It will just happily use getchar() and suffer because of it. If you static assert on new platforms then it forces you to look at the code for each new platform added and think about what version would be best. As Walter said, you can't know what version will be most appropriate for all new platforms. It should be possible to eat the cake and still have it... even if warnings are normally frowned upon (with good reason), using warnings for this would allow the benefits of both camps. It would allow painless prototype porting for new operating systems which are similar; yet, even if it does work, at a later point one would still have to go through all the version statements and either silence the warning or select the optimized path.
Re: Why not finally allow bracket-less top-level keywords?
On 3/18/12, deadalnix deadal...@gmail.com wrote: On 18/03/2012 17:49, Timon Gehr wrote: On 03/18/2012 05:25 PM, FeepingCreature wrote: Advantages: internally consistent, no need for completely new syntax, final class can be deprecated (it never worked well anyway). final class means that the class cannot be inherited from. What is the point of inheriting if no virtual method exists? Final classes might be useful for e.g. leaf classes.
Implicit integer casting
So D is really finicky with integer casts. Basically everything that might produce a loss-of-data warning in C is an outright compile error. This results in a lot of explicit casting. Now I don't take issue with this, actually I think it's awesome, but I think there's one very important usability feature missing from the compiler with such strict casting rules... Does the compiler currently track the range of a value, if it is known? And if it is known, can the compiler stop complaining about down casts and perform the cast silently when it knows the range of values is safe? int x = 123456; x &= 0xFF; // x is now in range 0..255; now fits in a ubyte ubyte y = x; // assign silently, cast can safely be implicit I have about 200 lines of code that would be so much more readable if this were supported. I'm finding that in this code I'm writing, casts are taking up more space on many lines than the actual term being assigned. They are really getting in the way and obscuring the readability. Not only masks, comparisons are also often used to limit the range of values. Add in D's contracts, and there is a good chance the compiler will have fairly rich information about the range of integers, and it should consider that while performing casts.
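For contrast, here is what that snippet has to look like under the rules Manu is describing — a sketch, assuming the compiler forgets the range across statements:

```d
import std.stdio;

void main()
{
    int x = 123456;
    x &= 0xFF;               // x is provably in 0..255 now...

    // ...but that fact is not carried to the next statement:
    // ubyte y = x;          // Error: cannot implicitly convert int to ubyte
    ubyte y = cast(ubyte) x; // the explicit cast the post complains about

    writeln(y); // 123456 & 0xFF == 64
}
```

The cast is harmless here, but it obscures the term being assigned, which is exactly the readability cost being discussed.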
Re: Implicit integer casting
On Mar 18, 2012 3:09 PM, Manu turkey...@gmail.com wrote int x = 123456; x &= 0xFF; // x is now in range 0..255; now fits in a ubyte ubyte y = x; // assign silently, cast can safely be implicit This is related to Go's infinitely sized constants. If a constant expression produces a value out of range but then brings it back in range, it still works. int i = 1 << 100 >> 98; assert(i == 4); Not exactly related, but a similar idea.
Re: Implicit integer casting
On Sunday, 18 March 2012 at 19:08:54 UTC, Manu wrote: So D is really finicky with integer casts. Basically everything that might produce a loss-of-data warning in C is an outright compile error. This results in a lot of explicit casting. Now I don't take issue with this, actually I think it's awesome, but I think there's one very important usability feature missing from the compiler with such strict casting rules... Does the compiler currently track the range of a value, if it is known? And if it is known, can the compiler stop complaining about down casts and perform the cast silently when it knows the range of values is safe? int x = 123456; x &= 0xFF; // x is now in range 0..255; now fits in a ubyte ubyte y = x; // assign silently, cast can safely be implicit I have about 200 lines of code that would be so much more readable if this were supported. I'm finding that in this code I'm writing, casts are taking up more space on many lines than the actual term being assigned. They are really getting in the way and obscuring the readability. Not only masks, comparisons are also often used to limit the range of values. Add in D's contracts, and there is a good chance the compiler will have fairly rich information about the range of integers, and it should consider that while performing casts. Walter even wrote an article about it: http://drdobbs.com/blogs/tools/229300211
Re: Proposal: user defined attributes
On 3/18/2012 2:47 AM, F i L wrote: Walter Bright wrote: I mean there is modifiable-at-runtime, instance-specific data. In C#, no there isn't. Attributes are simply objects constructed (when gotten) from an Entity's metadata. No memory is stored per-instance unless you manage the objects manually: class A : Attribute { public string s = "Default"; } [TestA] class C {} static void Main() { // The line below is equivalent to: var a = new A(); // except that its construction is defined by // metadata stored in type C. var a = typeof(C).GetCustomAttributes(true)[0] as A; a.s = "Modification"; Console.WriteLine(a.s); // prints "Modification" // Therefore... var b = typeof(C).GetCustomAttributes(true)[0] as A; Console.WriteLine(b.s); // prints "Default" } Which looks indistinguishable from modifiable-at-runtime, instance-specific data. Lazy initialization is a standard pattern. No special language features are needed for it. I see how my statements (and code examples) were confusing. I meant that no attribute data is stored per-instance at all (unless traditionally done so), and that attribute objects are simply created in-place at the point of access. So to clarify my previous code a bit: attribute class A { string i = "Default"; } @A class C { A a; } void main() { auto a = C@A; // create new A based on C assert(is(typeof(a) : A)); // alternatively you could do: auto c = new C(); auto a = c@A; // same as: typeof(c)@A c.a = c@A; // explicitly store attribute } Note: Might want to use the new keyword with class type attributes (auto a = new C@A), but the idea's there. Plus, I think that looks a lot better than the C# version. Sorry, it still looks like standard lazy initialization. I don't know what attributes add to the party. Sounds like a garden variety user-defined data type. It is. Only it's a data type whose construction values are stored in metadata (per entity), and therefore can be used at both compile and run times.
By per-entity I mean for each unique Type, Type member, Sub-Type, etc. I don't know of any existing D idiom that is capable of what I presented. Simply use a normal class. Instantiate it at runtime as needed. Which, aside from noise, is great for runtime reflection, but it's completely useless (I think) for the compiler because the variables are created through arbitrary strings. Plus, I don't know how you'd store anything but simple variables; more complex data would require a lot of entity_variables. Something like Jacob's proposal for compile-time attributes would be useful here. My failure to understand is about runtime attributes.
Re: Implicit integer casting
On 18 March 2012 21:15, Tove t...@fransson.se wrote: On Sunday, 18 March 2012 at 19:08:54 UTC, Manu wrote: So D is really finicky with integer casts. Basically everything that might produce a loss-of-data warning in C is an outright compile error. This results in a lot of explicit casting. Now I don't take issue with this, actually I think it's awesome, but I think there's one very important usability feature missing from the compiler with such strict casting rules... Does the compiler currently track the range of a value, if it is known? And if it is known, can the compiler stop complaining about down casts and perform the cast silently when it knows the range of values is safe? int x = 123456; x &= 0xFF; // x is now in range 0..255; now fits in a ubyte ubyte y = x; // assign silently, cast can safely be implicit I have about 200 lines of code that would be so much more readable if this were supported. I'm finding that in this code I'm writing, casts are taking up more space on many lines than the actual term being assigned. They are really getting in the way and obscuring the readability. Not only masks, comparisons are also often used to limit the range of values. Add in D's contracts, and there is a good chance the compiler will have fairly rich information about the range of integers, and it should consider that while performing casts. Walter even wrote an article about it: http://drdobbs.com/blogs/tools/229300211 Interesting. This article claims: Can we do better? Yes, with Value Range Propagation, a historically obscure compiler optimization that became a handy feature in the D programming language. But it doesn't seem to work. Am I just doing something wrong?
Re: OpenBSD port of dmd?
On 3/18/2012 11:28 AM, Tove wrote: It should be possible to eat the cake and still have it... even if warnings normally are frowned upon(with good reason), using warnings for this would allow the benefits from both camps... It would allow a painless prototype porting for new operating systems which are similar, yet... even if it does work, at a later point one would still have to go through all the version statements and either silencing the warning, or selecting the optimized path. I know that we've all been trained to avoid copypasta, and have an instinctive ick factor when seeing it. But it really is not painful to do a little copypasta for the system versions, nor does there need to be any new language features for this.
Re: Implicit integer casting
On 3/18/12, Manu turkey...@gmail.com wrote: I'm finding that in this code I'm writing, casts are taking up more space on many lines than the actual term being assigned. Another classic which fails to compile is: import std.random; ubyte c = uniform(0, 256); In this call, uniform returns a number anywhere from 0 up to and including 255, which fits perfectly in a ubyte. But I have to use a cast (which is error-prone if I change the right interval), or use a to!ubyte call (which is verbose). Granted, for simple-purpose random number generation a cast might be safe.
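The two workarounds Andrej mentions, side by side as a small sketch (assuming current std.random and std.conv behavior):

```d
import std.conv : to;
import std.random;
import std.stdio;

void main()
{
    // ubyte c = uniform(0, 256); // rejected: the result is int, and the
    //                            // 0..255 range is not proven to the compiler

    ubyte a = cast(ubyte) uniform(0, 256); // terse, but silently wrong if the
                                           // interval ever changes
    ubyte b = uniform(0, 256).to!ubyte;    // verbose, but range-checked at runtime

    writeln(a, " ", b);
}
```

The cast trades safety for brevity; to!ubyte keeps the check but at the syntactic cost Andrej objects to.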
Re: Implicit integer casting
On Sun, 18 Mar 2012 20:30:15 +0100, Manu turkey...@gmail.com wrote: On 18 March 2012 21:15, Tove t...@fransson.se wrote: On Sunday, 18 March 2012 at 19:08:54 UTC, Manu wrote: So D is really finicky with integer casts. Basically everything that might produce a loss-of-data warning in C is an outright compile error. This results in a lot of explicit casting. Now I don't take issue with this, actually I think it's awesome, but I think there's one very important usability feature missing from the compiler with such strict casting rules... Does the compiler currently track the range of a value, if it is known? And if it is known, can the compiler stop complaining about down casts and perform the cast silently when it knows the range of values is safe? int x = 123456; x &= 0xFF; // x is now in range 0..255; now fits in a ubyte ubyte y = x; // assign silently, cast can safely be implicit I have about 200 lines of code that would be so much more readable if this were supported. I'm finding that in this code I'm writing, casts are taking up more space on many lines than the actual term being assigned. They are really getting in the way and obscuring the readability. Not only masks, comparisons are also often used to limit the range of values. Add in D's contracts, and there is a good chance the compiler will have fairly rich information about the range of integers, and it should consider that while performing casts. Walter even wrote an article about it: http://drdobbs.com/blogs/tools/229300211 Interesting. This article claims: Can we do better? Yes, with Value Range Propagation, a historically obscure compiler optimization that became a handy feature in the D programming language. But it doesn't seem to work. Am I just doing something wrong? It only works within one expression. This works: int n = foo(); ubyte b = n & 0xFF; This does not: int n = foo() & 0xFF; ubyte b = n;
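A sketch of the distinction Martin describes, using ubyte so the 0..255 result of the mask actually fits (foo is just a stand-in returning an arbitrary int):

```d
import std.stdio;

int foo() { return 1000; } // stand-in for some runtime value

void main()
{
    // Within one expression: the compiler knows (n & 0xFF) lies in
    // 0..255 regardless of n, so the narrowing needs no cast.
    int n = foo();
    ubyte b = n & 0xFF;
    writeln(b); // 1000 & 0xFF == 232

    // Across statements the range information is discarded: m is just
    // an int by the next line, so the narrowing would be rejected.
    int m = foo() & 0xFF;
    // ubyte c = m; // Error: cannot implicitly convert int to ubyte
}
```

So value range propagation does work, just not across statement boundaries, which is why Manu's masked-then-assigned example still needs a cast.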