Re: Review of Andrei's std.benchmark
On Friday, 21 September 2012 at 04:44:58 UTC, Andrei Alexandrescu wrote:
> My claim is unremarkable. All I'm saying is the minimum running time of an algorithm on a given input is a stable and indicative proxy for the behavior of the algorithm in general. So it's a good target for optimization.

I reached the same conclusion and use the same method at work. Considering the min will converge towards a stable value quite quickly... would it not be a reasonable default to auto-detect when the min is stable with some degree of statistical certainty...?
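Such auto-detection could be sketched along these lines (a hypothetical helper, not part of std.benchmark; the stopping rule — a fixed number of non-improving runs — is a stand-in for a real statistical test):

```d
import core.time : Duration;
import std.datetime.stopwatch : AutoStart, StopWatch;

// Hypothetical: run `fun` repeatedly and report the minimum, stopping
// once the min has not improved for `stableRuns` consecutive trials.
Duration benchMin(void delegate() fun, size_t stableRuns = 50)
{
    auto best = Duration.max;
    size_t unchanged = 0;
    while (unchanged < stableRuns)
    {
        auto sw = StopWatch(AutoStart.yes);
        fun();
        sw.stop();
        const t = sw.peek();
        if (t < best) { best = t; unchanged = 0; }
        else ++unchanged;
    }
    return best;
}
```

A real implementation would probably also want an upper bound on total trials, so pathological timing noise can't make it spin forever.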
Re: Feature request: extending comma operator's functionality
On Friday, 5 October 2012 at 00:22:04 UTC, Jonathan M Davis wrote:
> On Friday, October 05, 2012 02:08:14 bearophile wrote:
> > Tommi:
> > > Maybe we forget about commas then, and extend if-clauses so that you can properly define variables at the beginning of it. Separated by semicolons.
> >
> > Regarding definition of variables in D language constructs, there is one situation where sometimes I find D not handy. This code can't work:
> >
> >     do { const x = ...; } while (predicate(x));
> >
> > You need to use:
> >
> >     T x; do { x = ...; } while (predicate(x));

Don't forget the with statement, it's not "just" for switches! In many cases it's actually even better than the proposed changes _and_ it works today!

import std.stdio;

struct d_is_beautiful
{
    int a = 1;
    int b = 2;
}

void main()
{
    with (d_is_beautiful()) if (a == 1) writeln("ok"); else writeln("ko:", a);

    with (d_is_beautiful()) do
    {
        ++a;
        writeln("iter");
    } while (a != b);
}
Re: Feature request: extending comma operator's functionality
On Friday, 5 October 2012 at 13:47:00 UTC, monarch_dodra wrote:
> On Friday, 5 October 2012 at 00:22:04 UTC, Jonathan M Davis wrote:
> > On Friday, October 05, 2012 02:08:14 bearophile wrote:
> > > [SNIP] Regarding definition of variables in D language constructs, there is one situation where sometimes I find D not handy. This code can't work: do { const x = ...; } while (predicate(x)); You need to use: T x; do { x = ...; } while (predicate(x));
> >
> > Yeah. That comes from C/C++ (and is the same in Java and C#, I believe). I don't know why it works that way. It's definitely annoying. [SNIP]
> > - Jonathan M Davis
>
> Because it's the only way to guarantee that x exists when you reach the end of the loop.
>
>     do
>     {
>         if (true) continue; // Yawn... skip.
>         const x = ...;
>     } while (predicate(x)); // What's x?
>
> Basic goto limitations. Unlike goto though, inserting a "continue" should never create a compile error, so the compiler *has* to guarantee that the loop condition references nothing inside its own block. It is annoying, but nothing that can't be fixed with a scope block.

There is a simple way around this... which addresses both concerns raised:
1. Semantics of old code are unchanged.
2. No issue with 'continue'.

do (const x = ...)
{
} while (predicate(x));
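For reference, the "scope block" workaround mentioned above looks like this (a trivial sketch with a made-up predicate):

```d
import std.stdio;

bool predicate(int x) { return x < 3; }

void main()
{
    {
        int x; // declared outside the do body, so the condition can see it
        do
        {
            ++x;
        } while (predicate(x));
        writeln(x); // 3
    } // ...but x still goes out of scope right here
}
```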
Re: function overload on full signature?
On Wednesday, 14 November 2012 at 06:52:57 UTC, Rob T wrote:
> On Wednesday, 14 November 2012 at 02:01:56 UTC, Jonathan M Davis wrote:
> Is there anything like C++ conversion operators in D? I have used conversion ops in C++ and may want to use a similar feature in D if available.
> --rt

It would be a very useful feature to allow overload on void and one other type... as sometimes the return value is very expensive to calculate... I have seen this trick used by compiler built-in functions.

struct A
{
    int i;
    string s;
    alias i this;
    alias s this;
}

but... two alias this declarations are not currently allowed.
Re: Fixing cyclic import static construction problems
On Friday, 30 November 2012 at 14:09:48 UTC, foobar wrote:
> Why not simplify?
>
>     static this()
>     {
>         import std.stdio, a, c; // existing syntax
>         ...
>     }
>
>     static this()
>     {
>         // no imports -> no dependencies
>         ...
>     }
>
> The current behavior should just be dropped.

+2 Simple & Elegant.
Re: dereferencing null
On Saturday, 3 March 2012 at 10:13:34 UTC, bearophile wrote:
> Walter:
> > Adding in software checks for null pointers will dramatically slow things down.
>
> Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
>
> Bye,
> bearophile

It's not a fair comparison, because the Java JIT will optimize the null checks away... Signal handlers might be the answer though, if the same behavior can be guaranteed on all major platforms...
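A rough POSIX-only sketch of the signal-handler idea (the handler name and its behavior are illustrative; a real runtime would have to translate the fault into a D Error, which takes considerably more platform-specific care):

```d
import core.stdc.stdio : printf;
import core.sys.posix.signal : SIGSEGV, sigaction, sigaction_t;
import core.sys.posix.unistd : _exit;

extern (C) void onSegv(int)
{
    // A real implementation would resume into D code and throw; here we
    // just report and bail out, since very little is legal in a handler.
    printf("null dereference (SIGSEGV)\n");
    _exit(1);
}

void main()
{
    sigaction_t sa;
    sa.sa_handler = &onSegv;
    sigaction(SIGSEGV, &sa, null);

    int* p = null;
    // *p = 1; // would now hit onSegv instead of a raw crash
}
```

The appeal is exactly what the post says: no per-dereference check is emitted, so the happy path costs nothing, and the hardware trap does the detection.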
Re: DustMite updated
On Sunday, 4 March 2012 at 19:43:47 UTC, Chad J wrote:
> On 03/04/2012 02:12 PM, Trass3r wrote:
> > Here's one for DMD 2.057. Knock yourself out ;)
>
> (As I mentioned in my other post, I can't build DustMite right now, or I'd do it myself. But if you want one...)
> -- chad
>
> chad@Hugin ~/dprojects/database $ dmd ice.d
> entity.(fld)
> Internal error: e2ir.c 683
>
> Done in 280 tests and 20 secs and 934 ms;
> http://d.puremagic.com/issues/show_bug.cgi?id=7645

Cool! Quite impressive tool, it can be reduced even further though. :)

class Entity { class fld() { char t; } }

void main()
{
    Entity entity;
    auto email_addy = entity.fld!().t;
}
Re: Breaking backwards compatiblity
On Saturday, 10 March 2012 at 19:01:29 UTC, Alex Rønne Petersen wrote:
> Personally I'm all for OS X; it's a good UI on top of a Unix shell - what's not to love? But I don't intend to start an OS war or anything here... :P

On "paper" (based on features) OS X has been my first OS of choice since the day it was launched... yet I have never once tried it, as there are no sane hardware options. :( Since I require a discrete graphics card, the "Mac Pro" is the only choice available, but it's a workstation-class computer; considering I don't have any mission-critical requirements for my home computer... the 100% price premium is not justified.
Re: Reference counted containers prototype
Is work ongoing on this container prototype? Sounds quite interesting...
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 10:25:20 UTC, F i L wrote:
> class CoolClass
> {
>     mixin Attribute!("int", "a", "Cool", "Heh");
>     mixin Attribute!("int", "b", "Cool", "Sup");
> }
>
> void main()
> {
>     auto c = new CoolClass();
>     writeln(c.a, ", ", c.b); // 0, 0
>     writeln(c.a_Cool().s);   // Heh
>     writeln(c.b_Cool().s);   // Sup
> }

Is it not possible to alias a mixin to just one letter, and then use it to have any syntax we want... something like this:

x("@attribute(Serializable.yes) int a");
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 10:50:14 UTC, F i L wrote:
> > x("@attribute(Serializable.yes) int a");
>
> Sure, but there's still the issue of using attributes for codegen. For instance compare:
>
>     struct Test { @GC.NoScan int value; }
>
> to, the current:
>
>     struct Test
>     {
>         int value;
>         this() { GC.setAttr(&value, NO_SCAN); }
>     }
>
> How can we do that with mixin templates? If attributes were a language type the compiler could exploit in a consistent way, it would be *trivial* to describe this behavior in a declarative way.

Hmm... well if the x declarations store all NoScan objects in a collection, it could be injected into the constructor token stream later...

x!("@GC.NoScan int value;"); // modified by x to insert a foreach with GC.setAttr
x!(q{this() { /* foreach(...) GC.setAttr(...); */ }});

But I guess one loses the opportunity for some compile-time magic... in a more efficient way than exposed by the GC.setAttr API. (Just pure speculation, I don't have sufficient knowledge of the internal representation of our GC design.)
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 12:10:23 UTC, F i L wrote:
> Tove wrote:
> > Hmm... well if the x declarations store all NoScan objects in a collection, it could be injected into the constructor token stream later...
> >
> > x!("@GC.NoScan int value;"); // modified by x to insert a foreach with GC.setAttr
> > x!(q{this() { /* foreach(...) GC.setAttr(...); */ }});
>
> But if x!() pumps out a constructor, how do you add multiple attributes with @GC.NoScan? The constructors would collide.

1. x! would parse all decls at compile time...
2. All attributes that need to modify the constructor are inserted at the points where the x!-enabled constructors are declared/implemented...

x!("@GC.NoScan @GC.Hot @attribute(Serializable.yes) int value;");
x!(q{this()
{
    /* everything from 'x' is auto-inserted here */
    my; normal; constructor; tokens;
}});
Re: Proposal: user defined attributes
On Sunday, 18 March 2012 at 12:39:19 UTC, F i L wrote:
> Tove wrote:
> > 1. x! would parse all decls at compile time...
> > 2. All attributes that need to modify the constructor are inserted at the points where the x!-enabled constructors are declared/implemented...
>
> I see, that would work. But why not just build this same operation into the compiler so the definition syntax is the same as usual. The mixins are powerful but a bit ugly. Not to mention no IDE parser on the planet is going to be able to figure out all that to give you intelligent code-completion.

Yes, I was thinking along these lines... what would be the absolute bare minimum compiler support needed to make this "scheme" look first class?

What if... when the compiler encounters an unknown @token, it delegates the parsing to a library implementation... which basically would do the above, but it would be hidden from the user... this way we would get zero effort (from the perspective of the compiler) and an extensible syntax in the library, covering all future needs.
Re: OpenBSD port of dmd?
On Sunday, 18 March 2012 at 18:09:42 UTC, Peter Alexander wrote:
> On Saturday, 17 March 2012 at 01:37:49 UTC, Andrei Alexandrescu wrote:
> > This seems to accomplish little more than "well I didn't use else". Again: what exactly is wrong with specialization?
> >
> > > The advantage is that when you write the code, you have _no idea_ what platform/OS it might need to run on in the future. You _cannot_ know which version is most appropriate for _all_ new platforms, or even if any of them will work at all.
> >
> > Oh yes I do. Often I know every platform has e.g. getchar() so I can use it. That will definitely work for everyone. Then, if I get to optimize things for particular platforms, great. This is the case for many artifacts.
>
>     version (PlatformX)
>     {
>         return superfastgetchar();
>     }
>     else
>     {
>         return getchar();
>     }
>
> Later on, we add support for PlatformY, which also supports superfastgetchar(). If you write code as above then no one will notice. It will just happily use getchar() and suffer because of it. If you static assert on new platforms then it forces you to look at the code for each new platform added and think about what version would be best. As Walter said, you can't know what version will be most appropriate for all new platforms.

It should be possible to eat the cake and still have it... even if warnings normally are frowned upon (with good reason), using warnings for this would allow the benefits from both "camps"... It would allow painless "prototype porting" for new operating systems which are similar; yet, even if it does work, at a later point one would still have to go through all the version statements and either silence the warning or select the optimized path.
Re: Implicit integer casting
On Sunday, 18 March 2012 at 19:08:54 UTC, Manu wrote:
> So D is really finicky with integer casts. Basically everything that might produce a loss-of-data warning in C is an outright compile error. This results in a lot of explicit casting. Now I don't take issue with this, actually I think it's awesome, but I think there's one very important usability feature missing from the compiler with such strict casting rules... Does the compiler currently track the range of a value, if it is known? And if it is known, can the compiler stop complaining about down casts and perform the cast silently when it knows the range of values is safe?
>
>     int x = 123456;
>     x &= 0xFF;   // x is now in range 0..255; now fits in a ubyte
>     ubyte y = x; // assign silently, cast can safely be implicit
>
> I have about 200 lines of code that would be so much more readable if this were supported. I'm finding that in this code I'm writing, casts are taking up more space on many lines than the actual term being assigned. They are really getting in the way and obscuring the readability.

Not only masks: comparisons are also often used to limit the range of values. Add in D's contracts, and there is a good chance the compiler will have fairly rich information about the range of integers; it should consider that while performing casts. Walter even wrote an article about it: http://drdobbs.com/blogs/tools/229300211
Re: Proposal: user defined attributes
On Monday, 19 March 2012 at 06:46:09 UTC, dennis luehring wrote:
> Am 19.03.2012 01:41, schrieb Walter Bright:
> > I'm sorry, I find this *massively* confusing. What is foo? Why are you serializing something marked "NonSerialized"? I really have no idea what is going on with this. What is the serialize() function? How does any of this tell you how to serialize an int? How is Base.c also a NonSerialized with c in a superclass of it?
>
> Attributes do not contain code - they are just (at-runtime) queryable information that can be attached to several things (like classes, methods, ...) - think of it like double.epsilon, but extendable by users - that's it. In the C# world these attribute definitions tend to be something like a class (but without code).
>
> In C# you can walk by (runtime) reflection through your code and find out if something is annotated with a special attribute, use the configured information and do something with it - call a constructor, open a connection, generate code (at runtime) - whatever you want. It's an easy-to-use built-in attribution system, that's it - and people like them because C# does all the big magic, giving developers a bunch of attributes that are then used for stuff like serialization, memory layout, ...
>
> A compile-time example of this could be:
>
>     attribute my_special_attribute { int version; }
>     attribute my_special_attribute2 { string test; }
>
>     class test
>     {
>         [my_special_attribute(version=2)]
>         int method1();
>
>         [my_special_attribute(version=2)]
>         [my_special_attribute2(test="bert")]
>         int method2();
>     }
>
>     void main()
>     {
>         auto b = [ __traits(allMembers, D) ];
>         foreach (auto a; b)
>         {
>             // attribute query magic
>             auto c  = [ __traits(attribute("my_special_attribute", a)) ];
>             auto c2 = [ __traits(attribute("my_special_attribute2", a)) ];
>             // now we know all methods with my_special_attribute
>             // and my_special_attribute2 and their content (version=2
>             // and test="bert")
>             // now think of a template or mixin that uses this
>             // information for code generation or something like that
>             // that's all
>         }
>     }

Well, I was thinking if we can go one step further than C#, because of D's CTFE... by introducing a callback from the D compiler to a library CTFE attribute handler... this way we can hide all reflection from the "end user"... and the compiler doesn't need to know anything at all, as the library does the semantic lowering.

@GC.NoScan int value;
@GC this() {}

The compiler asks the library for a transformation of the unknown @GC:

bool library("@GC this() {anything...}")

If the library succeeds, it then transforms the string and hands back a lowered mixin to the compiler:

mixin("
    this()
    {
        auto b = [ __traits(allMembers, D) ];
        foreach (auto a; b) { DO.GC.Stuff... }
        anything...
    }
");
Re: String mixin syntax sugar
On Tuesday, 20 March 2012 at 21:28:25 UTC, Jacob Carlborg wrote:
> On 2012-03-20 19:25, Mantis wrote:
> > Hello, since people discussed a lot about user-defined attributes recently, I've been thinking about a way to implement it with string mixins. The problem with them is their syntax - it's far from what we want to use in everyday work. I understand, they should be easily distinguished at the use site, but perhaps this may be accomplished in other ways as well. My idea is to translate this kind of statement:
> >
> >     # identifier statement
> >
> > into this:
> >
> >     mixin( identifier( q{ statement } ) );
>
> I don't like it. I want real user defined attributes.

I think the idea has merit; string mixins together with CTFE parsing are the holy grail... Because of the current syntax it's not really feasible to use on a per-member basis... but it is possible to use on a struct/class basis...

mixin(attr(q{struct Foo {
    @NonSerialized int x;
    @NonSerialized int y;
    int z;
}}));

Please disregard my broken parser (it's just a proof of concept). However, consider what we could do with the latest CTFE parser advances, coupled with a tighter compiler/library callback interface.

import std.stdio;
import std.array;
import std.string;
import std.algorithm;

string attr(string complex_decl)
{
    string org_struct;
    string ser_struct;
    auto lines = splitLines(complex_decl);
    {
        auto decl = split(stripLeft(lines[0]));
        if (decl[0] == "struct")
        {
            org_struct = decl[0] ~ " " ~ decl[1];
            ser_struct = decl[0] ~ " " ~ decl[1] ~ "_Serializable";
        }
        else
            return complex_decl;
    }
    foreach (line; lines[1 .. $])
    {
        auto attr = findSplitAfter(stripLeft(line), "@NonSerialized ");
        if (attr[0] == "@NonSerialized ")
            org_struct ~= attr[1];
        else
        {
            org_struct ~= attr[1];
            ser_struct ~= attr[1];
        }
    }
    return ser_struct ~ "\n" ~ org_struct;
}

mixin(attr(q{struct Foo {
    @NonSerialized int x;
    @NonSerialized int y;
    int z;
}}));

void main()
{
    auto m = [ __traits(allMembers, Foo) ];
    writeln("Normal members of Foo: ", m);
    auto n = [ __traits(allMembers, Foo_Serializable) ];
    writeln("Serializable members of Foo: ", n);
}
Re: Proposal: user defined attributes
On Wednesday, 21 March 2012 at 08:08:12 UTC, Jacob Carlborg wrote:
> On 2012-03-21 01:35, Adam D. Ruppe wrote:
> > On Wednesday, 21 March 2012 at 00:03:28 UTC, James Miller wrote:
> > So you'd just very simply do:
> >
> >     struct MyAttribute { bool activated; }
> >
> >     // @note is just like doing internalTuple ~= MyAttribute(true)
> >     // and MyAttribute is of course just a plain old struct initializer
> >     @note(MyAttribute(true)) int a;
> >
> > To check it:
> >
> >     foreach (note; __traits(getNotes, member_a))
> >         static if (is(typeof(note) == MyAttribute))
> >         {
> >             // do what you want here, ignore types you don't know
> >         }
>
> That's basically my initial proposal and how annotations work in Java.
>
> > What data can be packed into annotations? Being able to use custom types is an important part of my idea here.
>
> I think any type that a template can take (as a value) + other attributes/annotations.

With the mixin improvement proposal any arbitrarily complex feature can be implemented in the library, appearing to enjoy first-class syntax with just a one-character penalty vs. the compiler:

#struct Foo {
    @NonSerialized int x;
    @NonSerialized int y;
    int z;
}

Just imagine the next step, if the CTFE interface was based on ASTs instead of strings...
Re: Proposal: user defined attributes
On Wednesday, 21 March 2012 at 15:11:47 UTC, Andrei Alexandrescu wrote:
> class Foo
> {
>     int a;
>     int b;
>     mixin NonSerialized!(b);
> }
>
> I think the liability here is that b needs to appear in two places, once in the declaration proper and then in the NonSerialized part. (A possible advantage is that sometimes it may be advantageous to keep all symbols with a specific attribute in one place.) A possibility would be to make the mixin expand to the field and the metadata at once.

In case my proof of concept which was posted in another thread was overlooked... it was my goal to address this very issue... It's also possible to change the datatype of the members in the "parallel annotation class" (Foo_Serializable in my limited example), and store any extra user data there if so desired, while keeping the real class clean... and then simply traverse it with __traits.

import std.stdio;
import std.array;
import std.string;
import std.algorithm;

string attr(string complex_decl)
{
    string org_struct;
    string ser_struct;
    auto lines = splitLines(complex_decl);
    {
        auto decl = split(stripLeft(lines[0]));
        if (decl[0] == "struct")
        {
            org_struct = decl[0] ~ " " ~ decl[1];
            ser_struct = decl[0] ~ " " ~ decl[1] ~ "_Serializable";
        }
        else
            return complex_decl;
    }
    foreach (line; lines[1 .. $])
    {
        auto attr = findSplitAfter(stripLeft(line), "@NonSerialized ");
        if (attr[0] == "@NonSerialized ")
            org_struct ~= attr[1];
        else
        {
            org_struct ~= attr[1];
            ser_struct ~= attr[1];
        }
    }
    return ser_struct ~ "\n" ~ org_struct;
}

mixin(attr(q{struct Foo {
    @NonSerialized int x;
    @NonSerialized int y;
    int z;
}}));

void main()
{
    auto m = [ __traits(allMembers, Foo) ];
    writeln("Normal members of Foo: ", m);
    auto n = [ __traits(allMembers, Foo_Serializable) ];
    writeln("Serializable members of Foo: ", n);
}
Re: Three Unlikely Successful Features of D
On Wednesday, 21 March 2012 at 21:01:10 UTC, Kapps wrote:
> On the topic of import, mixin imports are something that I believe will eventually become a great deal more popular than they are today.

Definitely mixin imports and CTFE get my 3 votes!
Re: Proposal: user defined attributes
On Sunday, 25 March 2012 at 15:24:18 UTC, Jacob Carlborg wrote:
> On 2012-03-22 02:23, Tove wrote:
> > mixin(attr(q{struct Foo {
> >     @NonSerialized int x;
> >     @NonSerialized int y;
> >     int z;
> > }}));
> >
> > void main()
> > {
> >     auto m = [ __traits(allMembers, Foo) ];
> >     writeln("Normal members of Foo: ", m);
> >     auto n = [ __traits(allMembers, Foo_Serializable) ];
> >     writeln("Serializable members of Foo: ", n);
> > }
>
> Just really ugly and it creates a new type, completely unnecessary if D supported user defined attributes.

Well... "eye of the beholder"... I think that's exactly the beautiful part, because:
1) The original type is 100% unaltered...
2) It's easy for the compiler to optimize the new type away, as it's never instantiated nor used beyond CTFE reflection, i.e. zero runtime overhead.
3) It trivially allows using the built-in traits system everyone is already familiar with.

But I wonder if one can do better with a mixin template, accessing its "parent"...
Re: regex direct support for sse4 intrinsics
On Tuesday, 27 March 2012 at 09:51:07 UTC, bearophile wrote:
> Dmitry Olshansky:
> > Speaking more of the run-time version of regex, it is essentially running a VM that executes instructions that do various kinds of match-this, match-that. The VM dispatch code is quite slow; the optimal _threaded_ code requires either doing it in _assembly_ or _computed_ goto in the language. The VM _dispatch_ takes up to 30% of time in the default matcher.
>
> I have used computed gotos in GCC-C to implement some quite efficient finite state machines to be used in computational biology. I've seen 20%+ speedups compared to my alternative switch-based implementation. So I'd like computed gotos in D too.

While I am in favor of all enhancements which improve low-level access, I'm very surprised by your findings regarding computed gotos... the compiler I am most used to (RVCT for ARM)... seems proficient in emitting jump-table instructions (TBB, TBH) for Thumb-2... but based on your findings I will definitely re-check the generated asm.

Could it be that the compiler "heuristics" simply are less than optimal... and an alternative would be to force a specific implementation with a pragma? Or the recent @annotation syntax...

pragma(switch, "jumptable")
pragma(switch, "binary-search-tree")

It would have the benefit of not having to refactor the code, and one could easily benchmark which solution is the fastest for different inputs...
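D has no computed goto, but a final switch over a ubyte-backed enum is its most jump-table-friendly construct; a toy VM dispatch loop for comparison (the opcodes are invented for the example, this is not std.regex's matcher):

```d
enum Op : ubyte { push, add, halt }

// Tiny stack VM: the hot loop is one final switch, which the compiler
// is free to lower to a jump table (no default case to check for).
int run(const(ubyte)[] code)
{
    int[16] stack;
    size_t sp, pc;
    for (;;)
    {
        final switch (cast(Op) code[pc++])
        {
        case Op.push:
            stack[sp++] = code[pc++];
            break;
        case Op.add:
            --sp;
            stack[sp - 1] += stack[sp];
            break;
        case Op.halt:
            return stack[sp - 1];
        }
    }
}

void main()
{
    immutable ubyte[] prog = [Op.push, 3, Op.push, 4, Op.add, Op.halt];
    assert(run(prog) == 7);
}
```

Whether this beats threaded code is exactly the question raised above: with a single dispatch point the branch predictor sees one indirect jump, which is what computed goto avoids.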
Re: custom attribute proposal (yeah, another one)
On Friday, 6 April 2012 at 14:23:51 UTC, Steven Schveighoffer wrote:
> On Fri, 06 Apr 2012 10:11:32 -0400, Manu wrote:
> > On 6 April 2012 16:56, Steven Schveighoffer wrote:
> > > On Fri, 06 Apr 2012 09:53:59 -0400, Timon Gehr wrote:
> > > > I think this proposal should be merged with Johannes' one. It is very similar.
> > > I think the main distinction is that I focused on the fact that the compiler already has a mechanism to check and run CTFE functions.
> > Except you're using a function, which I don't follow. How does that work? Where do you actually store the attribute data? Just attaching any arbitrary thing, in particular, a struct (as in Johannes' proposal) is far more useful. It also seems much simpler conceptually to me. It's nice when things are intuitive...
>
> You can store a struct, just return it from an attribute function. e.g.:
>
>     @attribute Author author(string name) { return Author(name); }
>
> Why should we be restricted to only structs? Or any type for that matter? The benefit to using CTFE functions is that the compiler already knows how to deal with them at compile-time, i.e. less work to make the compiler implement this. I also firmly believe that determining what is allowed as attributes should be opt-in. Just allowing any struct/class/function/etc. would lead to bizarre declarations.
> -Steve

I think this proposal pretty much covers what I would expect from 'custom attributes'... but what about adding a D twist: getting "what we annotate" as a template parameter, so that one, among other things, can make use of template constraints?
Re: custom attribute proposal (yeah, another one)
On Friday, 6 April 2012 at 17:44:25 UTC, Steven Schveighoffer wrote:
> On Fri, 06 Apr 2012 13:33:33 -0400, Tove wrote:
> > I think this proposal pretty much covers what I would expect from 'custom attributes'... but what about adding a D twist: getting "what we annotate" as a template parameter, so that one, among other things, can make use of template constraints?
>
> Interesting, so something like:
>
>     @attribute string defaultName(T)() if (is(typeof(T.init.name))) { return T.init.name; }
>
> Not sure how much this gives us, but it definitely feels doable.
> -Steve

Yes, exactly... well, once library designers start getting creative, one of the immediate benefits would be easy-to-understand error messages.
Re: Small Buffer Optimization for string and friends
On Sunday, 8 April 2012 at 13:53:07 UTC, H. S. Teoh wrote:
> On Sun, Apr 08, 2012 at 12:56:38AM -0500, Andrei Alexandrescu wrote:
> [...]
> > 1. What happened to the new hash project? We need to take that to completion.
> [...]
>
> Sorry, I've been busy at work and haven't had too much free time to work on it. The current code is available on github: https://github.com/quickfur/New-AA-implementation
>
> The major outstanding issues are:
>
> - Qualified keys not fully working: the current code has a few corner cases that don't work with shared/immutable/inout keys. One major roadblock is how to implement this:
>
>     alias someType T;
>     inout(T) myFunc(inout(T) arg, ...)
>     {
>         int[inout(T)] aa;
>         ...
>     }
>
>   The problem is that inout gets carried over into the AA template, which breaks because it instantiates into something that has:
>
>     struct Slot
>     {
>         hash_t hash;
>         inout(T) key; // <-- this causes a compile error
>         Value value;
>     }
>
>   Ideally, AA keys should all be stored as immutable inside the AA, and automatically converted to/from the qualified type the user specified.
>
> - Template bloat: the current code uses template member functions, and will instantiate a new function for every implicit conversion of input key types. This also depends on IFTI, which has some quirks (compiler bugs) that make the code ugly (e.g., strings and arrays not treated equally by the compiler, requiring hacks to make implicit conversion work). Timon has suggested an alternative way of handling implicit conversions, which I think is better, but I need to take some time to actually implement it.
>
> - Static initialization of AA's (AA literals that compile directly into object code). This should be possible in principle, but I've run into what may be a CTFE bug that prevents it from working.
>
> - A not-so-major issue is to finish the toHash() implementations for all native types (currently it works for some common key types, but coverage is still incomplete). Once this is done, we can finally get rid of getHash from TypeInfo; UFCS will let us simply write x.toHash() for pretty much any type x.
>
> Once these issues are resolved, there remains the major task of actually integrating this code with druntime/dmd. A lot of work is expected on the dmd end, because of the current amount of hacks in dmd to make AA's work.
> T

Doesn't this work?

immutable std.traits.Unqual!(inout(T)) key;
x32-abi + D = fat pointers?
I just stumbled upon this: https://sites.google.com/site/x32abi/home

/rant
I remember back in the glorious MC68000 days (24-bit addressing)... leaving 8 bits for creative optimizations... until the 68020 took away all the fun, that is. So... I was kinda upset that x86-64 was explicitly designed not to permit such tricks despite having a 48-bit addressing mode... hmpf, shooting oneself in the foot is the fun part of programming.
/end rant

Anyway... x32-abi to the rescue: access to 64-bit registers just as normal... but using efficient 32-bit pointers. Suddenly there are 32 bits free to play with; D slices passed around in a single normal 64-bit register, anyone? Actually 32 bits is a lot... one could also possibly imagine creative flags for the GC.
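A sketch of what such a packed slice might look like on an ILP32 target like x32 (purely hypothetical, the name and layout are invented; on a normal 64-bit ABI the pointer of course no longer fits in 32 bits and the round-trip through cast(uint) would truncate it):

```d
// Hypothetical: on x32, a pointer is 32 bits, so pointer + length
// pack into one 64-bit word that fits in a single register.
struct PackedSlice(T)
{
    ulong raw; // low 32 bits: pointer, high 32 bits: length

    this(T* p, uint len)
    {
        raw = cast(uint) cast(size_t) p | (cast(ulong) len << 32);
    }

    @property T* ptr() { return cast(T*) cast(size_t) cast(uint) raw; }
    @property uint length() { return cast(uint)(raw >> 32); }
}

// the whole slice travels in one 64-bit register
static assert(PackedSlice!int.sizeof == 8);
```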
Re: Orphan ranges
On Sunday, 15 April 2012 at 16:34:12 UTC, Andrei Alexandrescu wrote:
> I'm making good progress on an allocator design. If things come together as I hope, it'll kick some serious ass. I'm currently looking at four allocation models:
>
> * straight GC, safe (no deallocation)
> * GC + the ability to free memory
> * malloc/free, unsafe, buyer beware (STL-level safety)
> * reference counted (based on either malloc or GC+free)
> * region (scoped)
>
> I need to kink out a few details, most important being - is safety part of the allocator or an extricable property that's a different policy? For now I have a related question about orphan ranges. Consider this motivating example:
>
>     void main()
>     {
>         int[] x;
>         {
>             auto b = new int[20];
>             x = b[5 .. $ - 5];
>         }
>         ... use x ...
>     }
>
> The range x is a 10-element range that originates in a 20-element array. There is no safe way to access the original array again, so in a sense the other 10 elements are "lost". That's why I call x an orphan range - a range of which the original container is gone. Built-in arrays work routinely like that, and in fact the originating arrays are not distinguished by type in any way from their ranges (be they orphan or not). The question is, what should std.container do about orphan ranges in general? Should it allow them, disallow them, or leave the decision open (e.g. to be made by the allocator)? Depending on what way we go, the low-level design would be quite different.
> Thanks, Andrei

Wow, cool! To flat out disallow orphan ranges is imho too restrictive, especially considering we already have a safe solution; what is missing is an unsafe (no overhead) version. If the design can handle safe/unsafe as a policy for a stack-based (scoped) allocator... that would be kick-ass, indeed.
Re: Why is complex being deprecated again?
On Sunday, 15 April 2012 at 21:09:13 UTC, Lars T. Kyllingstad wrote:
> I absolutely do not think it does. There is nothing you can do with a pure imaginary type that you cannot do with a complex type. Furthermore, the imaginary numbers have the unfortunate property of not being closed under multiplication and division, which is troublesome for generic code:
>
>     ireal x;
>     x *= x; // boom
>
> It seems nobody noticed, but I did in fact rewrite all of std.complex two years ago (almost to the day) in preparation for the deprecation of the built-in types. If there is anything missing from the module, I will be happy to add it.
> -Lars

The quote in the 'Semantics' section has a counter-example... http://dlang.org/cppcomplex.html
Re: Accessing UDA of private field
On Monday, 7 January 2013 at 10:19:45 UTC, Jacob Carlborg wrote:
> On 2013-01-06 23:33, Philippe Sigaud wrote:
> > Good thinking. It's not pretty but it works. Thanks. Maybe it can be hidden inside a template?
>
> Yeah, I'll see what I can do.

In which context does private fail? I'm using something like this:

struct my_struct
{
private:
    @(1) int t1;
    @(2) int t2;
    @(3) int t3;
}

foreach (m; __traits(allMembers, my_struct))
    with (my_struct.init)
        pragma(msg, __traits(getAttributes, mixin(m)));
Re: Accessing UDA of private field
On Monday, 7 January 2013 at 13:36:47 UTC, Jacob Carlborg wrote:
> On 2013-01-07 12:59, Tove wrote:
> > In which context does private fail? I'm using something like this:
> >
> >     struct my_struct
> >     {
> >     private:
> >         @(1) int t1;
> >         @(2) int t2;
> >         @(3) int t3;
> >     }
> >
> >     foreach (m; __traits(allMembers, my_struct))
> >         with (my_struct.init)
> >             pragma(msg, __traits(getAttributes, mixin(m)));
>
> Using a mixin works.

but this seems to work too?

import std.traits;

struct my_struct
{
private:
    @(1) int t1;
    @(2) int t2;
    @(3) int t3;
}

void main()
{
    foreach (m; __traits(allMembers, my_struct))
        pragma(msg, __traits(getAttributes, __traits(getMember, my_struct, m)));
}
Re: manual memory management
On Wednesday, 9 January 2013 at 20:16:04 UTC, Andrei Alexandrescu wrote: On 1/9/13 12:09 PM, Mehrdad wrote: It's memory-safe too. What am I missing here? What you're missing is that you define a store that doesn't model object references with object addresses. That's what I meant by "references are part of the language". If store is modeled by actual memory (i.e. accessing an object handle takes you to the object), you must have GC for the language to be safe. If store is actually indirected and gives up on the notion of address, then sure you can implement safety checks. The thing is everybody wants for references to model actual object addresses; indirect handles as the core abstraction are uninteresting. Andrei Quote from OpenBSD's malloc implementation: "On a call to free, memory is released and unmapped from the process address space using munmap." I don't see why this approach is less safe than a GC... in fact, I claim it's safer, because it's far simpler to implement and thus less likely to contain bugs. In addition, it's easy to make performance-vs-safety trade-offs, simply by linking with another memory allocator.
Re: dmd json file output
On Tuesday, 22 January 2013 at 08:02:26 UTC, Rainer Schuetze wrote: > "type" : { > "mangled" : "PPPi", > "pretty" : "int***", > } I would favour plain "type" : "int***". Consider that it will be parsed from many different languages (C#, Java, etc.), and generic tools may be able to handle JSON from multiple languages; in this context there is no reason to use differently mangled types for different languages. "int***" is both compact and easy enough to parse anyway. Even for pure D-based tools it could be useful, for unit-test reasons, to have the pretty name to compare against; thus Rainer's proposal is a reasonable compromise.
Re: Implementing Half Floats in D
On Monday, 28 January 2013 at 23:58:40 UTC, Walter Bright wrote: On 1/28/2013 3:30 PM, Era Scarecrow wrote: On Monday, 28 January 2013 at 23:11:11 UTC, Walter Bright wrote: http://www.drdobbs.com/cpp/implementing-half-floats-in-d/240146674 Anyone care to do the reddit honors? [quote] and crushed back down to 16 bytes for storage. [/quote] Should be bits. Otherwise it looks really well done. thank you. Sorry that didn't get caught in review! "HalfFloat h = hf!1.3f;" Maybe you could also demonstrate that it's possible to implement another literal syntax? HalfFloat h = 1.3.hf; some people will prefer that for sure.
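For reference, the suffix form already works today via UFCS; a minimal sketch (HalfFloat here is a stub standing in for the article's type, and the conversion is a deliberate placeholder, not the article's real bit-packing):

```d
// Sketch only: `HalfFloat` and `hf` are stand-ins for the article's
// implementation; what matters is the 1.3f.hf call syntax.
struct HalfFloat
{
    ushort bits; // assumption: real code packs sign/exponent/mantissa
}

HalfFloat hf(float f) pure nothrow @property
{
    return HalfFloat(cast(ushort)f); // placeholder, NOT a real conversion
}

void main()
{
    HalfFloat h = 1.3f.hf; // suffix-style literal via UFCS
    assert(h.bits == 1);   // with the placeholder conversion above
}
```

Since UFCS plus @property let any free function act as a suffix, both spellings can coexist and readers can pick whichever reads better.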
Re: Request for comments: std.d.lexer
On Friday, 1 February 2013 at 11:06:02 UTC, Walter Bright wrote: On 1/30/2013 8:44 AM, Dmitry Olshansky wrote: In allocation scheme I proposed that ID could be a 32bit offset into the unique identifiers chunk. That only works if you know in advance the max size the chunk can ever be and preallocate it. Otherwise, you have no guarantee that the next allocated chunk will be within 32 bits of address of the previous chunks. This can easily be achieved by preallocating file.size bytes... it will be x orders of magnitude too much, but it doesn't matter, as in the end only the cache locality matters.
Re: DIP23 draft: Fixing properties redux
On Monday, 4 February 2013 at 16:01:45 UTC, Steven Schveighoffer wrote: @property int foo(); auto x = &foo; // error int delegate() x = &foo; // ok -Steve I was going to submit the same suggestion, but didn't find time to until just now. gets my vote.
Re: DIP26: properties defined
On Saturday, 9 February 2013 at 03:13:47 UTC, Michel Fortin wrote: It's really great to not have to write boilerplate functions when default behaviour is perfectly fine. I've been using Objective-C for a while now and the recent changes where it automatically synthesizes a variable, a getter, and a setter when declaring a property (unless you provide your own) are truly delightful. @property int a; I would prefer if @property simply disallowed '&'; then it doesn't have to be lowered into anything and can stay a field... if you later decide to add a "real" getter/setter, it would still be source compatible and you wouldn't have to refactor the source.
Re: DIP23 draft: Fixing properties redux
On Wednesday, 6 February 2013 at 01:40:37 UTC, Andrej Mitrovic wrote: On 2/6/13, Jonathan M Davis wrote: That's why some of us have suggested making it so that you can mark variables with @property. What Jonathan means is this: struct S { int var; // modifiable, can take address } Now suppose later you want to turn var into a property: struct S { @property int var(); @property void var(int); } This potentially breaks code if the user-code was using a pointer to the public var field in the previous version of your library. So instead we should have the ability to annotate fields with @property: struct S { @property int var; // modifiable, can *not* take address } There's no run-time cost, but it disallows taking the address of var, and it allows you to introduce property functions in the future without breaking user-code. It is also possible to first start with setters/getters and then switch to a public field(!), which leads to the conclusion that, in order for the property abstraction to be complete, taking the address of *anything* annotated with @property should either NOT be allowed... @property int var(); // can *not* take address @property void var(int); // can *not* take address @property int var; // can *not* take address ... or we have to guarantee that the type remains unchanged... which is problematic due to the different types of the getter and setter, and would force one to always specify the expected type rather than relying on auto. @property int var; int delegate() d_get = &var; void delegate(int) d_set = &var;
Re: Alias syntax removal
On Sunday, 10 February 2013 at 14:42:50 UTC, kenji hara wrote: 2013/2/10 kenji hara Why I argue that the syntax `alias this = sym;` is wrong? Because: Benefits of the proposed syntax are: 2a. It is consistent with class inheritance syntax `class C : B {}`. 2b. It is scalable for multiple alias this feature, as like `alias this : sym1, sym2, ...;` . 2a. I agree. 2b. I always assumed multiple alias this would be introduced like this... alias this = sym1; alias this = sym2; ... which is also "needed" if you use a "3rd party library mixin" in your struct (one which internally uses alias this); so even with the ':' syntax it's required anyway to support using it multiple times: alias this : sym1; alias this : sym2; So I don't think 2b speaks in favor of the new syntax.
Re: DIP25 draft available for destruction
On Wednesday, 6 February 2013 at 21:40:00 UTC, Andrei Alexandrescu wrote: On 2/6/13 3:02 PM, Andrej Mitrovic wrote: Also the DIP argues that addressOf solves the @property issue w.r.t. return values. I've proposed we use an .addressOf property which only works on @property functions, and I saw no arguments against it. There aren't, but a library approach is better than a magic word, all other things being equal. Andrei struct S { @property int var(); @property void var(int); } The .addressOf property gave me the idea of solving the getter/setter issue by having two properties... var.getter and var.setter... maybe they could be added to your library approach though?
Re: Java binaries
On Sunday, 17 February 2013 at 03:26:13 UTC, js.mdnq wrote: Would it ever be possible to compile D source directly to java to take advantage of what java offers. (e.g., the ability to run d code inside a browser) I'm not talking about necessarily fully fledged functionality(obviously stuff like asm isn't going to work) but basically the ability to use D's syntax and some of it's compiler features(mixins, templates, etc). It depends on what you mean by "run inside a browser". I would use NaCl instead if I wanted to run D in a browser, but of course it requires Chrome. http://code.google.com/p/nativeclient/
Re: optional parens everywhere except last position of function chain.
On Wednesday, 27 February 2013 at 18:55:37 UTC, timotheecour wrote: Please let me know what you think. spontaneously... I love it!
Re: Vote for std.process
On Friday, 12 April 2013 at 15:43:27 UTC, Steven Schveighoffer wrote: On Fri, 12 Apr 2013 04:14:15 -0400, Manu I'd use string[]. You mean with format "a=b"? I suppose that's possible, though horrible to work with before passing in. Plus what happens if you have ["a=b", "a=c"] ? AA's prevent these kinds of mistakes/contradictions. I prefer Manu's idea of the API accepting string[]; it's closer to the native format. Then you could simply provide a convenience conversion from a map... e.g. env!["foo" : "bar"], which would convert it to ["foo=bar"]. This also has the added benefit of being self-documenting (considering the lack of named parameters). But most importantly, the user has a free choice of constructing the env parameter manually in the most efficient way or using the lazy convenience function.
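The convenience conversion could be a one-liner; a sketch (toEnv is a made-up name standing in for the env! template spelling suggested above):

```d
import std.algorithm : map;
import std.array : array;

// Hypothetical helper: flatten an associative array into the native
// ["key=value"] format that a string[] environment API would accept.
string[] toEnv(string[string] aa)
{
    return aa.byKeyValue
             .map!(kv => kv.key ~ "=" ~ kv.value)
             .array;
}

void main()
{
    assert(toEnv(["foo" : "bar"]) == ["foo=bar"]);
}
```

Built once outside a loop, the resulting array could then be reused across many process spawns, paying the conversion cost a single time.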
Re: Vote for std.process
On Friday, 12 April 2013 at 20:24:05 UTC, Steven Schveighoffer wrote: On Fri, 12 Apr 2013 15:26:12 -0400, Tove wrote: So for the most convenient/common case, you want to add an allocation? With the original proposal there is one anyway... But with my suggested approach you could create many processes reusing the same env... only paying the conversion/allocation cost once (outside of the loop).
Re: Vote for std.process
On Friday, 12 April 2013 at 20:52:55 UTC, Steven Schveighoffer wrote: On Fri, 12 Apr 2013 16:32:37 -0400, Tove wrote: On Friday, 12 April 2013 at 20:24:05 UTC, Steven Schveighoffer wrote: On Fri, 12 Apr 2013 15:26:12 -0400, Tove wrote: So for the most convenient/common case, you want to add an allocation? with the original proposal there is one anyway... I meant add an additional allocation to what is there. There needs to be one allocation to collect all the variables into one long string (on Windows), and with your suggestion, it has to go through an intermediate "key=value" format. But with my suggested approach you could create many processes reusing the same env... only paying the conversion/allocation cost once(outside of the loop). This would be attractive. But I still want a indexable object. Having to generate a=b strings is awkward. It could be something that does the right thing on POSIX (generate "a=b" under the hood) or Windows (Probably can cache the generated environment string). -Steve Hmhm, I see your point. Could our custom indexable object have 'alias this' to a 'union env', which is the in parameter to the process API?
Re: DIP 36: Rvalue References
On Monday, 22 April 2013 at 20:02:12 UTC, Andrei Alexandrescu wrote: 4. Above all this is a new language feature and again we want to resort to adding new feature only if it is clear that the existing features are insufficient and cannot be made sufficient. In particular we are much more inclined to impart real, demonstrable safety to "ref" and to make "auto ref" work as a reference that can bind to rvalues as well as lvalues. Why wouldn't DIP36's "scope ref" be forward compatible with a future safe "auto ref"? ... and if in the future the compiler were able to infer "scope ref" from "auto ref", this entire DIP could be reused, with the benefit that people could start using this functionality already today (there is a full pull request with a very small delta), before the auto inference is in place.
Re: rvalue references
On Tuesday, 23 April 2013 at 07:18:41 UTC, Diggory wrote: I'd still like someone to explain how exactly "scope ref" would differ from "ref" if DIP25/DIP35 were implemented. If the only difference is that "scope ref" can accept rvalues then why would you ever use normal "ref"? There are no extra restrictions needed on "scope ref" over and above normal "ref" under the assumption of DIP25/DIP35. DIP25 imposes a number of code-breaking restrictions even in @system code; if DIP36 were in place, one could consider imposing the DIP25 restrictions only in SafeD. Furthermore, if one day the compiler were sufficiently smart to infer scope automatically, there would still be an important difference between 'ref' and 'scope ref'. With plain 'ref', binding rvalues would only work if the compiler succeeds in inferring scope; it could take a conservative approach to make sure it always errs in the harmless direction, i.e. any '&' or any asm block is an automatic failure. With 'scope ref', it works unless the compiler can prove it wrong (also usable from SafeD if marked with @trusted).
Re: rvalue references
On Tuesday, 23 April 2013 at 09:06:52 UTC, deadalnix wrote: On Tuesday, 23 April 2013 at 08:41:16 UTC, Tove wrote: DIP25 imposes a number of code-breaking restrictions even in @system code, if DIP36 was in place, one could consider imposing the DIP25 restrictions only in SafeD. Furthermore if one day the compiler would be sufficiently smart to infer scope automatically, there still would be an important difference between 'ref' and 'scope ref'. That is the important issue to solve. Many solutions can jeopardize DIP36, which is why it must be delayed. Usually, conflating issues in ad hoc solutions ends up in crap that must be sorted out later. I see it as a future-proof feature, not an issue. You want it to be a difference, so you can override the default compiler behavior.
Re: Mixin template parameters / mixin template literals
On Wednesday, 24 April 2013 at 02:18:07 UTC, Luís Marques wrote: Consider: sort!("a > b")(array); how about? sort!(q{a > b})(array); http://dlang.org/lex.html#TokenString
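Token strings lex as D tokens but still evaluate to the same string, so the two spellings are interchangeable; a quick check:

```d
import std.algorithm : sort;

void main()
{
    // q{...} contents must be valid D tokens, but the value is a string
    static assert(q{a > b} == "a > b");

    auto arr = [3, 1, 2];
    sort!(q{a > b})(arr); // descending order
    assert(arr == [3, 2, 1]);
}
```

The practical win is editor support: the contents of q{} are usually syntax-highlighted as code rather than as an opaque string literal.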
Re: rvalue references
On Wednesday, 24 April 2013 at 12:38:19 UTC, Andrei Alexandrescu wrote: On 4/24/13 6:27 AM, Diggory wrote: Anyway, it seems in general that everyone thinks DIP25A should be implemented, or am I mistaken? I'd like to work a bit more on it before a formal review. Andrei If you find the time one day, please revisit the "Taking address" section. I'm convinced that the goal of DIP25 could be fully realized even with some of the restrictions relaxed/lifted, with less code-breakage as a result. In particular: allowing '&' for non-ref parameters and "Stack-allocated locals" in @system. The restriction encourages a bad programming style where heap is preferred over stack, just to silence the compiler. Yes, '&' is dangerous, but it's a separate issue; why conflate a "Sealed references" DIP with restrictions on normal non-ref C-style systems programming? Thanks for reading this far...
Re: Does D have too many features?
On Saturday, 28 April 2012 at 18:48:18 UTC, Walter Bright wrote: Andrei and I had a fun discussion last night about this question. The idea was which features in D are redundant and/or do not add significant value? A couple already agreed upon ones are typedef and the cfloat, cdouble and creal types. What's your list? garbage collector *duck and run* The point I'm trying to make is... normally I would use around 4-5 different languages depending on _what_ problem I'm currently solving... occasionally even languages I'm not particularly proficient with, just because a language might have an edge in a certain domain... however D is basically good enough at "everything"... with a few exceptions. So what one person considers redundant is integral to someone else with a different background... no, D doesn't have too many features.
Re: How can D become adopted at my company?
On Sunday, 29 April 2012 at 22:13:22 UTC, Manu wrote: Is it technically possible to have a precise GC clean up all unreferenced memory in one big pass? yes, but unless it's also moving/compacting... one would suffer memory fragmentation... so I would imagine TempAlloc is a better fit?
Re: How can D become adopted at my company?
On Sunday, 29 April 2012 at 23:04:00 UTC, Manu wrote: In some cases I'm comfortable with that type of fragmentation (large regularly sized resources), although that leads me to a gaping hole in D's allocation system... Hmmm I see, also I was thinking... since we have TLS, couldn't we abuse killing threads for fast deallocations? While adding persistent data to __gshared? There is no way to request aligned memory. I can't even specify I feel your pain, couldn't agree more.
Re: scope ref const(T) --> error?!
On Thursday, 3 May 2012 at 18:28:19 UTC, Mehrdad wrote: What's wrong with passing a struct as scope ref const? I want to avoid copying the struct, and its information is only read inside the function... ref scope? hm? What additional semantics do you desire from that construct, which 'const ref' doesn't provide? A a; void fun(const scope ref A x) { // x goes out of scope, destroy it... oops it's a global variable!? } fun(a);
Re: scope ref const(T) --> error?!
On Thursday, 3 May 2012 at 22:25:45 UTC, Ali Çehreli wrote: On 05/03/2012 03:21 PM, Tove wrote: > On Thursday, 3 May 2012 at 18:28:19 UTC, Mehrdad wrote: >> What's wrong with passing a struct as scope ref const? >> >> I want to avoid copying the struct, and its information is only read >> inside the function... > > ref scope? hm? What additional semantics do you desire from that > construct, which 'const ref' doesn't provide? scope is a not-yet-implemented promise of the function. It says: "trust me, I will not use this reference outside of the function." > A a; > > void fun(const scope ref A x) > { > // x goes out of scope, destroy it... oops it's a global variable!? scope does not destroy. scope makes this illegal (assuming that A is a class type): a = x; There is deprecated use of the 'scope' keyword but this is not it. > } > > fun(a); > Ali right, thanks. I forgot about that, since it was never implemented I didn't use it. But nevertheless... the actual implemented semantics is the same for parameters as for the deprecated function body case, at the end of the function the parameter goes out of scope too! i.e. destructor should be called.
Re: scope ref const(T) --> error?!
On Thursday, 3 May 2012 at 22:43:16 UTC, Tove wrote: scope does not destroy. scope makes this illegal (assuming that A is a class type): a = x; There is deprecated use of the 'scope' keyword but this is not it. > } > > fun(a); > Ali right, thanks. I forgot about that, since it was never implemented I didn't use it. But nevertheless... the actual implemented semantics is the same for parameters as for the deprecated function body case, at the end of the function the parameter goes out of scope too! i.e. destructor should be called. Hmmm sorry for the confusion, I was living under the delusion that: scope class A{} void fun(scope A x){} fun(new A()); did something, but it doesn't. ;)
Re: run-time stack-based allocation
On Tuesday, 8 May 2012 at 07:03:35 UTC, Gor Gyolchanyan wrote: Cool! Thanks! I'll definitely check it out! I hope it's DDOCed :-D I just invented an absolutely wicked way of using alloca() in the parent context... unfortunately frame_size is static, but with some work it's still a usable method since it's mutable! struct Wicked { static int frame_size = 0; auto Create(void* buf = alloca(frame_size)) { for(byte i = 0; i < frame_size; i++) (cast(byte*)buf)[i] = i; return (cast(byte*)buf)[0 .. frame_size]; } }
Re: run-time stack-based allocation
On Thursday, 10 May 2012 at 03:03:22 UTC, Andrei Alexandrescu wrote: On 5/9/12 3:17 PM, Tove wrote: On Tuesday, 8 May 2012 at 07:03:35 UTC, Gor Gyolchanyan wrote: Cool! Thanks! I'll definitely check it out! I hope it's DDOCed :-D I just invented an absolutely wicked way of using alloca() in the parent context... Yah, me too. http://forum.dlang.org/thread/i1gnlo$18g0$1...@digitalmars.com#post-i1gql2:241k6o:241:40digitalmars.com I found it by googling for my name and "dark" and "devious" :o). Andrei Muharrr, way cool :D We seriously need a highly visible "blog-ish page" (which was suggested in another lost thread), listing some useful D gems... I have a feeling these forums are a treasure trove of forgotten snippets...
Re: Optional parameters referring to previous parameters?
On Thursday, 10 May 2012 at 15:51:08 UTC, Steven Schveighoffer wrote: On Thu, 10 May 2012 11:48:25 -0400, Steven Schveighoffer wrote: And BTW, it's not just sugar -- you are saving a function call and unnecessary code space. -Steve ui, pretty please! I love it too, it would allow this marvelous construct! auto my_extended_alloca(size_t size, void* buf=alloca(size)) { return buf; }
Re: Optional parameters referring to previous parameters?
On Thursday, 10 May 2012 at 17:41:23 UTC, dennis luehring wrote: Am 10.05.2012 19:07, schrieb Tove: auto my_extended_alloca(size_t size, void* buf=alloca(size)) { return buf; } and whats the difference to? auto my_extended_alloca(size_t size, void* buf) { return alloca(size); } except that you hide the alloca in the interface which can be easily overwritten with malloc or something? auto x = my_extended_alloca( 10, malloc(100) ); ??? When used in the parameter list, the alloca() is injected into the parent scope. Your version doesn't work at all, as the allocation automatically ends with the scope of my_extended_alloca() instead of the scope of the caller!
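Concretely, the injection works because default arguments are evaluated at the call site; a sketch of the form of the trick that compiles today (using a mutable static for the size, since a default argument cannot yet refer to the earlier size parameter):

```d
import core.stdc.stdlib : alloca;

// The default argument is evaluated in the *caller's* frame, so the
// alloca'd memory survives the call and dies with the caller instead.
__gshared size_t frame_size;

void* parent_alloca(void* buf = alloca(frame_size))
{
    return buf; // merely forwards memory the caller already owns
}

void main()
{
    frame_size = 64;
    auto p = cast(ubyte*)parent_alloca(); // alloca ran in main's frame
    assert(p !is null);
    p[0 .. 64] = 0; // still valid here: the memory belongs to main
}
```

With the requested feature (default parameters referring to previous ones), frame_size could become a plain size parameter and the static would disappear.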
Re: What library functionality would you most like to see in D?
On Sunday, 31 July 2011 at 14:10:12 UTC, Heywood Floyd wrote: - Incremental Garbage collector (for real-time apps) - More example code snippets in the docs (for all libs) The entire compiler as a library. :D
Re: Getting the const-correctness of Object sorted once and for all
On Monday, 14 May 2012 at 16:53:24 UTC, Timon Gehr wrote: On 05/14/2012 06:10 AM, Chris Cain wrote: On Monday, 14 May 2012 at 02:57:57 UTC, Mehrdad wrote: The problem is that it's unavoidable. i.e. you can't say "don't mark it as const if it isn't const", because, practically speaking, it's being forced onto the programmers by the language. You're really against const in this language, huh? I guess this is not the most important point. He has been trying to use const like in OO-ish C++. This just does not work, because D const is detrimental to OO principles when used that way. The proposal is about _enforcing_ C++-like usage of const. But C++ has the 'mutable' keyword as an easy escape route... which has saved me a bunch of times... I guess one can emulate it with a library solution using nested classes? But... what about structs? class Outer { int i = 6; // mutable class Inner { int y=0; int foo() const { // ++y; // fail return ++i; // look ma, mutable const } } Inner inner; this() { inner = new Inner; } alias inner this; }
Re: logical const idea - scratchspace
On Monday, 14 May 2012 at 21:19:43 UTC, Steven Schveighoffer wrote: On Mon, 14 May 2012 17:11:14 -0400, Alex Rønne Petersen wrote: Another concern I have is that this couples a feature tightly to the implementation of the GC. What if another GC doesn't use the same allocation scheme? newScratchSpace uses GC.malloc to ensure the block is big enough. The GC must support returning a block of memory large enough to hold the requested bytes. It's not tightly coupled, even though it depends on the GC. -Steve It is an interesting idea... but can we assume that no current/future D compiler will make false 'alias assumptions', since it only sees const pointers? Something similar to 'mutable' is needed... A way to work around it would be to use 'volatile this' access, which would kinda force the compiler to do the right thing when overriding const... but sadly it's deprecated, and as far as I know there is no alternative... btw, how is that intended to work when using a pointer to a hardware register?
tuple of ranges - findSplit
I'm currently designing an interface, which conceptually is similar to findSplit... so I decided to peek at/learn from Phobos... "findSplit returns a tuple result containing three ranges" tuple(haystack[0 .. pos1], haystack[pos1 .. pos2], haystack[pos2 .. haystack.length]); As one easily can spot, pos1 and pos2 occur twice... in isolated cases it doesn't matter, but in my case I was planning to generate a number of these. Hmmm... just wondering, did anyone already design/implement a pretty/efficient interface on top of a structure similar to the one below? struct { uint r0; union { uint r1; uint r2; } union { uint r3; uint r4; } uint r5; }
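One way to put a pretty interface on top of the shared-bound layout is plain properties over two cut points; a sketch (all names invented here, storing two indices instead of six slice bounds):

```d
// Sketch: the three findSplit-style ranges share their interior
// bounds, so two indices over one haystack are enough.
struct Split(T)
{
    T[] haystack;
    size_t pos1, pos2; // each stored once instead of twice

    @property T[] pre()   { return haystack[0 .. pos1]; }
    @property T[] match() { return haystack[pos1 .. pos2]; }
    @property T[] post()  { return haystack[pos2 .. $]; }
}

void main()
{
    auto s = Split!int([1, 2, 3, 4, 5], 1, 3);
    assert(s.pre   == [1]);
    assert(s.match == [2, 3]);
    assert(s.post  == [4, 5]);
}
```

Because the properties are trivial accessors, generating many of these costs only the two indices per split, while callers still see three ordinary slices.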
Re: Destructor nonsense on dlang.org
On Thursday, 24 May 2012 at 15:43:57 UTC, Alex Rønne Petersen wrote: We just need a dispose pattern whereby explicit dispose() instructs the GC to not finalize. So I'm curious, what resource are we trying to free here? None. I just came across it in the docs and found it completely insane. Hmm... well, as long as it's optional behavior... as in my case I actually want to go in the opposite direction... a short-lived tool which claims x resources and is run once for every file... So in this case, resources should be "freed" _unless_ it's at program termination... as then it just slows down the shutdown procedure; the OS reclaims it faster anyway.
Re: Destructor nonsense on dlang.org
On Thursday, 24 May 2012 at 17:06:19 UTC, ponce wrote: I really had a hard time believing it when #D told me so, but there is no guaranteed order of destruction, and you cannot rely on members still being alive in a class destructor. All of it can happen while making absolutely no cycles in the object graph. What I do now is having a close function for each class which holds a non-memory resource. This is written in TDPL, but I wish I was told earlier :) http://dlang.org/class.html#destructors "This rule does not apply to auto objects or objects deleted with the DeleteExpression, as the destructor is not being run by the garbage collector, meaning all references are valid." i.e. non-GC resources are fine... and it's also fine if you call clear()... it's only a problem if you rely on automatic collection and reference an object... so there's no need for close, as clear() will do the trick.
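A minimal illustration of the clear() route (clear() was later renamed destroy(); the sketch uses destroy so it compiles on current compilers):

```d
bool released; // stands in for a non-memory resource being freed

class Resource
{
    ~this() { released = true; } // runs deterministically via destroy
}

void main()
{
    auto r = new Resource;
    assert(!released);
    destroy(r);       // dtor runs *here*, while references are valid
    assert(released); // no separate close() function needed
}
```

The point is exactly the quoted rule: a destructor invoked explicitly is not running inside a collection cycle, so it may safely touch its members.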
Re: Destructor nonsense on dlang.org
On Thursday, 24 May 2012 at 17:57:11 UTC, Steven Schveighoffer wrote: On Thu, 24 May 2012 13:47:31 -0400, Tove wrote: There's a big problem with this though. Your destructor *has no idea* whether it's being called from within a collection cycle, or from clear. You must assume the most restrictive environment, i.e. that the dtor is being called from the GC. This is even true with struct dtors! -Steve If there is a clear location where a manual close() function can be called... then there are many ways to automatically and safely call clear() instead. std.typecons.Unique If you are a library creator, you could even use a factory to enforce wrapping in Unique... But I don't see any point in adding a non-standard destructor function name; there are numerous ways to facilitate RAII.
Re: Destructor nonsense on dlang.org
On Thursday, 24 May 2012 at 19:46:07 UTC, foobar wrote: Looks to me like an issue with separation of concerns. I think that dtors need to only provide deterministic management of resources and not affect GC algorithms: 1. classes should *not* have dtors at all. 2. struct values should *not* be gc managed [*]. Why not simply set "BlkAttr.NO_SCAN" on ourselves if we need certain resources in the destructor? Assuming we one day get user-defined attributes, it can be made quite simple...
Re: Destructor nonsense on dlang.org
On Thursday, 24 May 2012 at 20:53:33 UTC, Tove wrote: On Thursday, 24 May 2012 at 19:46:07 UTC, foobar wrote: Looks to me like an issue with separation of concerns. I think that dtors need to only provide deterministic management of resources and not affect GC algorithms: 1. classes should *not* have dtors at all. 2. struct values should *not* be gc managed [*]. Why not simply set "BlkAttr.NO_SCAN" on ourselves if we need certain resources in the destructor? Assuming we one day get user defined attributes, it can be make quite simple... Tested my idea... unfortunately it's broken... GC.collect() while the program is running, is OK... so I was hoping to add: GC.disable() just before main() ends, but apparently this request is ignored. i.e. back to square 1, undefined collecting order once the program exits. import std.stdio; import core.memory; class Important { this() { us ~= this; } ~this() { writeln("2"); } private: static Important[] us; } class CollectMe { Important resource; this() { resource = new Important(); } ~this() { writeln("1"); clear(resource); } } void main() { GC.setAttr(cast(void*)new CollectMe(), GC.BlkAttr.NO_SCAN); GC.collect(); GC.disable(); writeln("3"); }
Re: Example of Rust code
On Friday, 10 August 2012 at 12:32:28 UTC, bearophile wrote: This second D version uses the same class definitions, but allocates the class instances on the stack. The code is bug prone and ugly. The other disadvantages are unchanged: void main() { import std.stdio; import std.conv: emplace; import core.stdc.stdlib: alloca; enum size_t size_Val = __traits(classInstanceSize, Val); enum size_t size_Plus = __traits(classInstanceSize, Plus); enum size_t size_Minus = __traits(classInstanceSize, Minus); Val e1 = emplace!Val(alloca(size_Val)[0 .. size_Val], 5); Val e2 = emplace!Val(alloca(size_Val)[0 .. size_Val], 3); Val e3 = emplace!Val(alloca(size_Val)[0 .. size_Val], 1); Plus e4 = emplace!Plus(alloca(size_Plus)[0 .. size_Plus], e2, e3); Minus ex2 = emplace!Minus(alloca(size_Minus)[0 .. size_Minus], e1, e4); writeln("Val: ", eval(ex2)); } Probably there are ways to improve my D versions, or to write better versions. Bye, bearophile I think version 2 would be the easiest one to improve, by including a combined emplace/alloca convenience function in Phobos for this common use-case. See the technique used in: http://www.digitalmars.com/d/archives/digitalmars/D/run-time_stack-based_allocation_166305.html "auto Create(void* buf=alloca(frame_size))"
Re: Formatted read consumes input
On Friday, 24 August 2012 at 11:18:55 UTC, Dmitry Olshansky wrote: C's scanf is a poor argument as it uses pointers instead of ref (and it can't do ref as there is no ref in C :) ). Yet it doesn't allow to read things in a couple of calls AFAIK. In C scanf returns number of arguments successfully read not bytes so there is no way to continue from where it stopped. BTW it's not documented what formattedRead returns ... just ouch. Actually... look up "%n" in sscanf; it's wonderful, I use it all the time.
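A small illustration of the %n trick through D's C bindings (the literal input here is just example data):

```d
import core.stdc.stdio : sscanf;

void main()
{
    int value, consumed;
    // %n stores the number of characters consumed so far, so a
    // follow-up scan can resume exactly where this one stopped.
    sscanf("123 rest", "%d%n", &value, &consumed);
    assert(value == 123);
    assert(consumed == 3);
}
```

This is precisely what plain scanf's return value (a count of matched arguments, not bytes) fails to provide: a resumable position in the input.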
Re: param2 = param1
On Tuesday, 27 August 2013 at 21:21:31 UTC, Timon Gehr wrote: - Safe alloca wrapper using the alloca default argument hack together with this. (i.e. bearophile's dynamically-sized strongly typed stack-based arrays.) Oh Yes please! I've been waiting for this for a long time, there even was an enhancement request written to facilitate the alloca default argument hack! http://d.puremagic.com/issues/show_bug.cgi?id=8075
Re: new DIP47: Outlining member functions of aggregates
On Sunday, 8 September 2013 at 09:24:52 UTC, Michael wrote: On Sunday, 8 September 2013 at 09:15:52 UTC, Namespace wrote: I'm against it. More important than such a gimmick are the many open bugs, auto ref, AA, scope, etc. And don't forget the implementation of the virtual keyword. +1 I strongly dislike DIP47; I found many unintended discrepancies in our C code-base at work... precisely because of "lax rules", even cases with wrong linkage as a result! "Parameter names need not match." "If there is a default parameter value, it may only appear in the member function declaration." This forces indexing of source and jump-to-declaration features in the IDE, whereas the current way is more friendly to simpler text editors; the problem which DIP47 is trying to solve is anyway solved by IDEs' "Class View" feature etc. i.e. for people using IDEs (class view or ddoc) nothing changes with DIP47. For people using plain editors, DIP47 makes it worse. Even if DIP47 is implemented, I hope this feature is strongly discouraged in the standard library.
Re: new DIP47: Outlining member functions of aggregates
Wouldn't this style be an acceptable compromise instead, with both declaration and definition 100% identical? struct S { // member function declarations static int mfunc1(int a, int b = 5) pure; static int mfunc2(int a, int b = 5) pure; static int mfunc3(int a, int b = 5) pure; // member function definitions static int mfunc1(int a, int b = 5) pure { } static int mfunc2(int a, int b = 5) pure { } static int mfunc3(int a, int b = 5) pure { } }
Re: new DIP48: Interface specifications for aggregate types
On Sunday, 8 September 2013 at 18:13:52 UTC, Simen Kjaeraas wrote: In response to Walter's DIP47 I have created my own take on what I see as the main problem: http://wiki.dlang.org/DIP48 Destroy! I like it, but would prefer @interface instead of interface, since using interface in this way reminds me too much of anonymous structs/unions.
Re: std.d.lexer: pre-voting review / discussion
On Wednesday, 11 September 2013 at 15:02:00 UTC, Dicebot wrote: std.d.lexer is a standard module for lexing D code, written by Brian Schott I remember reading that there were some interesting advances in hashing in dmd recently: http://forum.dlang.org/thread/kq7ov0$2o8n$1...@digitalmars.com?page=1 Maybe it's worth benchmarking those hashes for std.d.lexer as well.
Re: Debug information for enumerator values
On Tuesday, 17 September 2013 at 09:52:37 UTC, Iain Buclaw wrote:

    (gdb) print ('test.enum_ulong')3
    $11 = (test.enum_ulong.kE2 | test.enum_ulong.kE3)
    (gdb) print ('test.enum_ulong')2
    $12 = test.enum_ulong.kE3

What do you think? Is it too verbose, or just right? :-) Regards Iain

Kickass! I think it's "just right"... _BUT_ in case of multiple values, I would prefer something like this:

    $11 = test.enum_ulong(kE2 | kE3)
link-time codegen assert?
A very minimal example:

    template ctfe(alias any)
    {
        alias ctfe = any;
    }

    double ms(double val) pure nothrow @property
    {
        if (__ctfe)
            return val / 1000.0;
        else
            assert(false);
    }

The above allows for writing...

    ctfe!(10.ms)

...which is in natural reading order, as opposed to...

    ms!10

...but one would have to guard against users accidentally writing...

    10.ms

...which would be a catastrophic hidden performance bug. I was not able to find a way to use static assert only when code is generated for the function, so my question is: do you see a generic need for a third type of assert, or can you find another way to solve the above issue?
Re: link-time codegen assert?
On Tuesday, 1 October 2013 at 11:42:20 UTC, Tove wrote: A very minimal example: template ctfe(alias any) { alias ctfe = any; } double ms(double val) pure nothrow @property { if(__ctfe) return val / 1000.0; else assert(false); } Turns out it was quite easy to solve...

    void link_assert() pure nothrow; // deliberately declared without a body

    link_assert(); // any call surviving codegen fails at link time
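A sketch of how the trick composes with the earlier ms example: since link_assert has no body anywhere, any compiled code path that references it produces an undefined-symbol error at link time. Whether an unused body is emitted at all (and thus triggers the error) depends on the compiler and linker (templates and --gc-sections behave differently), so treat this as a sketch of the idea, not a guaranteed diagnostic:

```d
void link_assert() pure nothrow; // no body on purpose

double ms(double val) pure nothrow @property
{
    if (__ctfe)
        return val / 1000.0;
    link_assert(); // reached only in generated code -> link error
    return double.nan;
}

enum x = 10.ms; // pure CTFE use never executes the runtime branch
```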
Re: std.d.lexer : voting thread
On Thursday, 3 October 2013 at 11:04:26 UTC, Dicebot wrote: Yes. ( I have not found any rules that prohibit the review manager from voting :) ) I'd love to say yes, since I've been dreaming of the day when we finally have a lexer... but I decided to put my yes under the condition that it can lex itself using CTFE. My first attempt, adding an "import(__FILE__)" unittest, failed with v2.063.2:

    Error: memcpy cannot be interpreted at compile time, because it has no available source code
    lexer.d(1966): called from here: move(lex)
    lexer.d(454): called from here: r.this(lexerSource(range), config)

Maybe this is fixed in HEAD, though?
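The attempt presumably looked roughly like this. Hedged: byToken and lexerSource are names from the std.d.lexer API under review, reconstructed from the error message above, and the module was never part of a released Phobos:

```d
unittest
{
    import std.array : array;
    import std.d.lexer; // the module under review, not in released Phobos

    enum source = import(__FILE__);      // requires -J. on the command line
    enum tokens = byToken(source).array; // forces the lexer through CTFE
}
```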
Re: More on C++ stack arrays
On Sunday, 20 October 2013 at 19:42:29 UTC, Walter Bright wrote: On 10/20/2013 12:23 PM, bearophile wrote: Walter Bright: No. But I do know that alloca() causes pessimizations in the code generation, and it costs many instructions to execute. Allocating fixed size things on the stack executes zero instructions. 1) Alloca allows allocating in the parent context, which is guaranteed to elide copying, without relying on a "sufficiently smart compiler".

    ref E stalloc(E)(ref E mem = *(cast(E*) alloca(E.sizeof)))
    {
        return mem;
    }

2) If only accessing the previous function parameter were supported (which is just an arbitrary restriction), it would be sufficient to create a helper function to implement VLAs. 3) Your "fixed size stack allocation" could be combined with alloca as well, in which case it would likely be faster still.
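Point 2 refers to a helper like the following, which is NOT legal D today precisely because a default argument cannot refer to a preceding parameter (n) — the very restriction being called arbitrary. The name stackArray is hypothetical:

```d
import core.stdc.stdlib : alloca;

// Illegal today: the default argument references the earlier
// parameter `n`. If that were allowed, this single helper would
// give VLA-style caller-frame arrays.
T[] stackArray(T)(size_t n, void* mem = alloca(n * T.sizeof))
{
    return (cast(T*) mem)[0 .. n];
}
```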
Re: More on C++ stack arrays
On Monday, 21 October 2013 at 01:48:56 UTC, Walter Bright wrote: On 10/20/2013 5:59 PM, Jonathan M Davis wrote: If that paradigm is frequent enough, it might be worth wrapping it in a struct. Then, you'd probably get something like StaticArray!(int, 10) tmp(n); int[] a = tmp[]; which used T[10] if n was 10 or less and allocated T[] otherwise. The destructor could then deal with freeing the memory. Sounds like a good idea - and it should fit in with Andrei's nascent allocator design. Hmmm, it gave me a weird idea...

    void smalloc(T)(ushort n, void function(T[]) statement)
    {
        if (n <= 256)
        {
            if (n <= 16)
            {
                T[16] buf = void;
                statement(buf[0 .. n]);
            }
            else
            {
                T[256] buf = void;
                statement(buf[0 .. n]);
            }
        }
        else
        {
            if (n <= 4096)
            {
                T[4096] buf = void;
                statement(buf[0 .. n]);
            }
            else
            {
                T[65536] buf = void;
                statement(buf[0 .. n]);
            }
        }
    }

    smalloc(256, (int[] buf) { });
Re: 1 matches bool, 2 matches long
On Friday, 26 April 2013 at 21:01:17 UTC, Brian Schott wrote: On Friday, 26 April 2013 at 06:01:27 UTC, Walter Bright wrote: The real issue is do you want to have the implicit conversions: 0 => false 1 => true or would you require a cast? The idea of a "true number" and a "false number" doesn't make sense, so yes. I find the current implementation perfectly intuitive and I wouldn't want it any other way... it models the underlying hardware, just the way it should be. Sometimes, due to bad coding standards, I'm forced to write...

    if((...long expression with not immediately apparent operator precedence...) != 0)

...absolutely appalling; it kills readability with extra parentheses etc. It doesn't matter for how many years I was forced to do it, I still cringe every time I see a line like that and itch to rewrite it more readably. I also don't know any book (including Knuth), nor online article, which doesn't clearly define it as 0 and 1... I am very confused by the reactions in this thread; is my background so different from everyone else's?
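For reference, D follows C here: an integral expression is usable directly as a condition, which is exactly what the quoted coding standard forbids. A small sketch contrasting the two styles:

```d
void main()
{
    long flags = 0b1010;

    if (flags & 0b0010) { }        // idiomatic: non-zero means taken
    if ((flags & 0b0010) != 0) { } // the mandated style being lamented
}
```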
Re: Rvalue references - The resolution
On Saturday, 4 May 2013 at 18:33:04 UTC, Walter Bright wrote: Runtime Detection There are still a few cases that the compiler cannot statically detect. For these a runtime check is inserted, which compares the returned ref pointer to see if it lies within the stack frame of the exiting function, and if it does, halts the program. The cost will be a couple of CMP instructions and an LEA. These checks would be omitted if the -noboundscheck compiler switch was provided. Thanks for taking the time to detail the solution, I was quite curious. Runtime detection with opt-out via "-noboundscheck" is a stroke of genius! The "couple of CMP instructions" should be possible to reduce to only one with the "normal" unsigned range-check idiom, no? Looking forward to hearing more cool news. :)
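The "normal" unsigned range-check idiom referred to collapses the two comparisons lo <= p && p < hi into one: subtract the lower bound and let unsigned wrap-around push below-range values out of the window. A sketch (names are illustrative, not from the compiler):

```d
// p in [lo, hi) with a single unsigned comparison: if p < lo the
// subtraction wraps to a huge value and the check fails.
bool inFrame(size_t p, size_t lo, size_t hi)
{
    return p - lo < hi - lo;
}

unittest
{
    assert( inFrame(5, 0, 10));
    assert(!inFrame(15, 0, 10));
    assert(!inFrame(3, 4, 10)); // below the window: wraps, fails
}
```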
Re: Rvalue references - The resolution
On Sunday, 5 May 2013 at 07:22:06 UTC, Jonathan M Davis wrote: Now, I argued that pure's primary benefit isn't really in optimizations but rather in the fact that it guarantees that your code isn't accessing global state, but there's still the general concern that there's a lot of new attributes to worry about, whether you choose to use them or not. I don't think that it was a deal-breaker for Don or anything like that, but it was one of his concerns and one more item on the list of things that makes it more costly for them to move to D2, even if it alone doesn't necessarily add a huge cost. - Jonathan M Davis Assuming: 1. functioning attribute inference 2. attributes are expanded in the *.di file Then it would be trivial to create a tool which, upon request, merges "a defined set of attributes" back into the original D source file. This would reduce some of the burden, and with full IDE integration even more so.
Re: I want to add a Phobos module with template mixins for common idioms.
On Friday, 10 May 2013 at 21:04:32 UTC, Idan Arye wrote: On Wednesday, 8 May 2013 at 20:11:34 UTC, Idan Arye wrote: OK, so I'm gonna go ahead and implement it, so I can show by example that the string solution can be typesafe, scalable and elegant. OK, this is a basic implementation: https://gist.github.com/someboddy/5557358 Before I can make the pull request, I still need to do documentation, add some asserts to make sure users don't declare methods or subtypes in the property declarations string, add some more unit tests, and add the other idiom (the singleton). But it's still enough for demonstrating that strings are not evil, and that their usage here does not break type safety, scope, or anything else. Kickass technique, hope this gets included soon. Keep up the good work!
Re: I was wrong
On Friday, 31 May 2013 at 08:45:08 UTC, Dicebot wrote: On Thursday, 30 May 2013 at 18:06:03 UTC, Walter Bright wrote: about the changelog. Andrej Mitrovic has done a super awesome job with the changelog, and it is paying off big time. I am very happy to be proven wrong about it. It is so good I could not have even expected to see something like that. Andrej, awesome! I agree the changelog is awesometastic, but the run button could use some tweaks...
Re: The state of core.simd
On Saturday, 1 June 2013 at 10:18:27 UTC, Benjamin Thaut wrote: I've taken a look at core.simd and I have to say it is unusable. In a very small test program I already found 3 bugs: 1) Using debug symbols together with core.simd will cause an ICE http://d.puremagic.com/issues/show_bug.cgi?id=10224 2) The STOUPS instruction is not correctly implemented: http://d.puremagic.com/issues/show_bug.cgi?id=10225 3) The XMM register allocation is catastrophic: http://d.puremagic.com/issues/show_bug.cgi?id=10226 What's the current state of core.simd? Is it still being worked on? Because in its current state it's pretty much unusable. Kind Regards Benjamin Thaut Does this generate better code?

    float4 v = __vector([1.0f, 2.0f, 3.0f, 4.0f]);
Re: The state of core.simd
On Saturday, 1 June 2013 at 10:57:03 UTC, Benjamin Thaut wrote: Am 01.06.2013 12:52, schrieb Tove: does this generate better code? float4 v = __vector([1.0f, 2.0f, 3.0f, 4.0f]); That doesn't even compile. You can try it out yourself using: http://dpaste.dzfl.pl/ Kind Regards Benjamin Thaut OK, sorry about that... this compiles, but the 'Disassembly' button is not functional for me... http://dpaste.dzfl.pl/1e0407c3
Re: xdc: A hypothetical D cross-compiler and AST manipulation tool.
On Friday, 19 July 2013 at 13:38:12 UTC, Chad Joan wrote: Even with a conservative target like C89-only, there are still an incredibly large number of extremely useful D features (OOP, templates, scope(exit), CTFE, mixins, ranges, op-overloading, etc) that DO come for free. I love the idea behind xdc, but I would go with C99 instead; even MS, as the last vendor(?), is with VS2013 now finally committed to supporting C99. Variable-length arrays alone would make it worth it.
Re: std.serialization: pre-voting review / discussion
On Wednesday, 14 August 2013 at 08:48:23 UTC, Jacob Carlborg wrote: On 2013-08-14 10:19, Tyler Jameson Little wrote: - Typo: NonSerialized example should read NonSerialized!(b) No, it's not a typo. If you read the documentation you'll see that: "If no fields or "this" is specified, it indicates that the whole class/struct should not be (de)serialized." I understand the need for Orange to be backwards compatible, but for std.serialization, why isn't the old-style mixin simply removed in favor of the UDA? Furthermore, for "template NonSerialized(Fields...)" there is an example, while for the new style "struct nonSerialized;" there isn't! I find the new style both more intuitive and more DRY, since it doesn't duplicate the identifier as in "int b; mixin NonSerialized!(b)":

    @nonSerialized struct Foo
    {
        int a;
        int b;
        int c;
    }

    struct Bar
    {
        int a;
        int b;
        @nonSerialized int c;
    }
Re: another cool RTInfo trick - i want in runtime
On Thursday, 16 January 2014 at 15:57:05 UTC, Adam D. Ruppe wrote: So yeah, this is pretty much a pure win all around to my eyes. Am I blind to the suck? Wow, great trick! I'd love to see this merged.
Re: Disallow null references in safe code?
On Sunday, 2 February 2014 at 09:56:06 UTC, Marc Schütz wrote:

    auto x = *p;
    if(!p)
    {
        do_something(x);
    }

In the first step, the if-block will be removed, because its condition is "known" to be false. After that, the value stored into x is unused, and the dereference can get removed too. With a good static analyzer, such as Coverity, this program would be rejected anyway with "check_after_deref"; if the compiler is smart enough to do the optimization, it could be smart enough to issue a warning as well!
Re: D as A Better C?
On Tuesday, 11 February 2014 at 19:43:00 UTC, Walter Bright wrote: I've toyed with this idea for a while, and wondered what the interest there is in something like this. The idea is to be able to use a subset of D that does not require any of druntime or phobos - it can be linked merely with the C standard library. To that end, there'd be a compiler switch (-betterC) which would enforce the subset. (First off, I hate the name "better C", any suggestions?) The subset would disallow use of any features that rely on: 1. moduleinfo 2. exception handling 3. gc 4. Object I've used such a subset before when bringing D up on a new platform, as the new platform didn't have a working phobos. What do you think? It's a Delightful ("dlite"?) idea; I have long considered doing something like this, as it would facilitate using D at work.
Re: D as A Better C?
On Wednesday, 12 February 2014 at 20:10:42 UTC, Jacob Carlborg wrote: (First off, I hate the name "better C", any suggestions?) -no-runtime Good choice, and even if Walter is blocked on higher-priority issues, we can still make it happen as a community.
Re: Two Questions
On Wednesday, 12 February 2014 at 20:23:55 UTC, Kagamin wrote: On Sunday, 9 February 2014 at 21:12:57 UTC, Jonathan M Davis wrote: And you get more memory out of the deal even if you have as little as 4GB in the box. I wish that everything would move to 64-bit so that we wouldn't have to even worry about 32-bit anymore. What's the advantage of having 64-bit OS on 4gb RAM? x32 is the "obvious" solution, best of both worlds: http://en.wikipedia.org/wiki/X32_ABI ... I really wonder why it has not yet gone mainstream.
Re: More Illuminating Introductory Code Example on dlang.org
On Wednesday, 12 February 2014 at 20:49:54 UTC, Nordlöw wrote: I believe the first code example a newbie sees when he/she first visits dlang.org should be some variation of Walter's showcase on Component Programming, including all the bells and whistles of lazily evaluated ranges. IMHO, this would increase the probability of the newbie staying a bit longer on the site trying to figure out the details of what makes this intriguing D code example tick. And, as a result, being more convinced about D's unique and powerful features. What do you think, fellow D programmers? Absolutely, and there is a process in place for this already; feel free to suggest something: "[your code here] Got a brief example illustrating D? Submit your code to the digitalmars.D forum specifying "[your code here]" in the title. Upon approval it will be showcased on a random schedule on D's homepage."
Re: DIP56 Provide pragma to control function inlining
On Sunday, 23 February 2014 at 12:07:40 UTC, Walter Bright wrote: http://wiki.dlang.org/DIP56 Manu has needed always inlining, and I've needed never inlining. This DIP proposes a simple solution. Yay, all for it! The DIP should probably specify what happens if inlining fails, i.e. generate a compilation error. Could we consider adding "flatten" in the same DIP? Quote from gcc: "flatten: Generally, inlining into a function is limited. For a function marked with this attribute, every call inside this function is inlined, if possible. Whether the function itself is considered for inlining depends on its size and the current inlining parameters."
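Roughly what DIP56 proposes, as a sketch (the pragma form is taken from the DIP; the syntax eventually accepted into the language may differ):

```d
// Force inlining; the DIP should specify whether a compiler that
// cannot comply must error out -- that's the gap noted above.
pragma(inline, true)
int timesTwo(int x) { return x * 2; }

// Forbid inlining, e.g. for cold error-reporting paths.
pragma(inline, false)
void reportError(string msg) { /* ... */ }
```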
Re: DIP56 Provide pragma to control function inlining
On Sunday, 23 February 2014 at 12:57:00 UTC, Walter Bright wrote: On 2/23/2014 4:25 AM, Tove wrote: The DIP should probably specify what happens if inlining fails, i.e. generate a compilation error. I suspect that may cause problems, because different compilers will have different inlining capabilities. I think it should be a 'recommendation' to the compiler. Would an assert be feasible, or difficult to implement with the current compiler design?

    static assert(pragma(inline, true));
Re: DIP56 Provide pragma to control function inlining
On Sunday, 23 February 2014 at 21:53:43 UTC, Walter Bright wrote: I'm aware of that, but once you add the: version(BadCompiler) { } else pragma(inline, true); things will never get better for BadCompiler. And besides, that line looks awful. If I need to support multiple compilers and one of them is not good enough, I would first try to figure out which statement causes it to fail; if left with no other alternative: manually inline it in the common path for all compilers, _not_ create version blocks. Inspecting asm output doesn't scale well to huge projects. Imagine simply updating the existing codebase to use a new compiler version. Based on my experience, even if we are profiling and benchmarking a lot and have many performance-based KPIs, they will still never be as fine-grained as the functional test coverage. Also, not forgetting that some performance issues may only be detected in live usage scenarios on the other side of the earth, as the developers don't even have access to the needed environment (only imperfect simulations); in those scenarios you are quite grateful for every static compilation error/warning you can get... You are right in that there is nothing special about inlining, but I'd rather add warnings for all other failed optimisation opportunities than not warn about failed inlining. RVCT, for instance, has --diag_warning=optimizations, which gives many helpful hints, such as alias issues (please add "restrict") or possible alignment issues, etc.
Accelerating "domain-specific languages" in CTFE
Projects such as Pegged and our CTFE regex engine often serve as poster children of what is possible in D, and many agree they are among the more important projects. I was thinking: once std.lexer is accepted, we'll have a stable interface, but no matter how great the code is, and even if it beats the already superlative DMD lexer, it will _NOT_ be fast during CTFE. It is also often stressed in this very forum that it's paramount for a lexer to be beyond fast. For the above reasons, I propose that the compiler offer an interface to retrieve an already-lexed buffer, similar in concept to the existing token string q{...}: "Token strings open with the characters q{ and close with the token }. In between must be valid D tokens." By definition they contain only valid tokens... the compiler would only have to create a range compatible with std.lexer...
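A reminder of what token strings look like: since the contents already had to lex as valid D tokens, the compiler has, in effect, lexed them once before handing them over as a plain string:

```d
// q{...} is just a string literal, but its contents must be valid D tokens.
enum src = q{ int answer = 42; };

mixin(src); // declares: int answer = 42;
static assert(is(typeof(answer) == int));
```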