Re: Unencumbered V0.1.2: Write Cucumber step definitions in D
On Thursday, 24 April 2014 at 18:53:22 UTC, Jacob Carlborg wrote: On 2014-04-23 15:24, Atila Neves wrote: Like testing with Cucumber? Wish you could call native D code with it? Now you can! http://code.dlang.org/packages/unencumbered https://github.com/atilaneves/unencumbered I especially like registering functions that take parameters with the types they need from the regexp captures, as well as the compile-time failures that result when this is done incorrectly. Now I just need to use it in real life. BTW, why is the description passed as a template argument to the Cucumber keywords, i.e. @Given!(foo) instead of @Given(foo)? Finally got around to it, and now it's @Given(foo) like it should have been. Bumped the version up to v0.2.0. Atila
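For readers who haven't seen the project, a hypothetical sketch of the registration style described above — the module path and exact attribute signatures here are assumptions for illustration, not taken from the actual unencumbered source:

```d
// Hypothetical sketch of typed step definitions; module path and
// attribute API are assumed, not verified against the library.
import cucumber.keywords : Given, When, Then;

int result;

@Given("^a calculator$")
void aCalculator() { result = 0; }

// Captures from the regexp are converted to the declared parameter
// types; a mismatch in arity or type is reported at compile time
// rather than at runtime.
@When(`^I add (\d+) and (\d+)$`)
void addNumbers(int first, int second) { result = first + second; }

@Then(`^the result should be (\d+)$`)
void checkResult(int expected) { assert(result == expected); }
```

The appeal is that the regex-to-parameter plumbing is checked by the compiler, so a step that binds a non-numeric capture to an int fails to build instead of failing mid-scenario.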
Re: Parallel execution of unittests
The D unittest feature has been a mixed bag from the beginning for me. By the time a codebase gets around to considering parallelizing its unittests, it has in many cases become very expensive to make the change. If order of execution were not guaranteed, coders would be forced to make a better long-term investment from the beginning. A nice side effect of having undefined order is that programmers are forced to think about state. See http://googletesting.blogspot.se/2013/03/testing-on-toilet-testing-state-vs.html Another link I would like to drop here which is only mildly relevant: I wish more developers were aware of the tests vs. checks discussion. http://www.satisfice.com/blog/archives/856
Re: More radical ideas about gc and reference counting
On Thursday, 1 May 2014 at 02:36:23 UTC, Walter Bright wrote: On 4/30/2014 6:50 PM, deadalnix wrote: On Thursday, 1 May 2014 at 01:20:37 UTC, Walter Bright wrote: A link to your previous comment would be useful. I've written several proposals regarding this. Please at least read them, as what you just wrote only proves you are not well informed. Notably, the proposal does not require annotations from users to do the kinds of things that are currently being special-cased. A link would be nice. http://forum.dlang.org/thread/yiwcgyfzfbkzcavuq...@forum.dlang.org Just made a new topic with code samples, so the idea can be grasped more easily (or so I hope).
Isolated by example
First the original post I made on this forum: http://forum.dlang.org/thread/kluaojijixhwigouj...@forum.dlang.org#post-kluaojijixhwigoujeip:40forum.dlang.org Now some sample code/explanation to get it across better.

Isolated is a proposal adapted for D from an experiment made in C#. It introduces a new qualifier. The qualifier is necessary on function signatures (inference is possible to some extent, as for pure functions), object/struct fields, and globals. It is inferred on local variables (but can be made explicit if one wishes).

An isolated lives in an 'island'. The 'island' is implicit and tracked by the compiler. We have one immutable and one shared island. We have one TL island per thread. And we can have any number of isolated islands.

An isolated is consumed when:
- it is returned
- it is passed as an argument
- it is assigned to another island

When an isolated goes out of scope without being consumed, the compiler can free the whole island:

void foo() {
    A a = new A(); // a is isolated if A's ctor allows.
    // a is not consumed. The compiler can insert memory freeing.
}

As we can see, this allows the compiler to do some freeing for us to reduce GC pressure. Manu, why aren't you already here cheering?

When an isolated is consumed, the island it is in is merged into the island that consumes it. All references to the island become write-only until the next write.

void foo() {
    A a = new A();   // a is isolated
    immutable b = a; // a's island is merged into immutable
    // a's island has been consumed. a is not readable at this point.
    // a.foo()       // Error
    a = new A();     // OK
    a.foo();         // OK, we have written into a.
}

So far, we have seen that isolated helps to construct const/immutable/shared objects and can be used by the compiler to insert frees. isolated also helps to bridge the RC world and the GC world.

struct RC(T) if (isReferenceType!T) {
    private T mystuff;

    this(isolated T stuff) {
        mystuff = stuff;
    }

    // All code to do ref counting goes here...
}

Here, the RC struct must be constructed with an isolated. To do so, the isolated has to be passed as an argument: it is consumed. As a result, the RC struct can be sure that it holds the only usable reference to stuff. Now some concurrency goodies:

void foo() {
    auto tid = spawn(spawnedFunc, thisTid);
    A a = new A();
    send(tid, a); // OK, a is an isolated.
    // a can't be used here anymore, as it is consumed.
    // You can simply pass isolated around across threads safely.
}

Now that is pretty sweet. First, we don't have to do the crazy and unsafe dance of casting to shared and casting back, and this is actually safe. Go's type system is not safe across channels, so this puts us ahead in one of the things Go does best. std.parallelism can also benefit from this. There are also several benefits when the optimizer knows about isolated (different islands do not alias each other).

I hope the idea gets across better with some sample code and will be considered. As the sample code shows, isolated does not need to be specified explicitly very often. Users who don't annotate can get a lot of benefit out of the concept right away.
Re: Parallel execution of unittests
On Friday, 2 May 2014 at 04:28:26 UTC, Ola Fosheim Grøstad wrote: On Friday, 2 May 2014 at 03:04:39 UTC, Jason Spencer wrote: If we don't want to consider how we can accommodate both camps here, then I must at least support Jonathan's modest suggestion that parallel UTs require active engagement rather than being the default. Use chroot() and fork(). Solves all problems. You know, executing batches of tests in multiple processes could be a good compromise. You might still run into filesystem issues, but if you run a series of tests with a number of processes at the same time, you can at least guarantee that you won't run into shared memory issues.
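The batching idea above can be sketched with fork()/waitpid() — POSIX-only, and the batch type and error reporting here are simplified assumptions:

```d
// A sketch of running a batch of tests in a child process, so crashes
// and shared-memory corruption die with the child instead of taking
// down the whole test run.
import core.sys.posix.unistd : fork, _exit;
import core.sys.posix.sys.wait : waitpid;

bool runBatchIsolated(void delegate() batch)
{
    auto pid = fork();
    if (pid == 0)
    {
        batch();   // runs against a copy-on-write copy of parent memory
        _exit(0);  // clean exit: the batch passed
    }
    int status;
    waitpid(pid, &status, 0);
    return status == 0; // nonzero exit or a signal marks the batch failed
}
```

An assert failure or segfault in the child produces a nonzero status, so the parent can report the batch as failed and keep running the others.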
Re: More radical ideas about gc and reference counting
On Thursday, 1 May 2014 at 20:03:03 UTC, Andrei Alexandrescu wrote: On 5/1/14, 12:52 PM, Nordlöw wrote: into a class. I'm inclined to say that we should outright prohibit that, That can't happen. Why is that? (1) Too much breakage, (2) would disallow a ton of correct code, (3) no reasonable alternative to propose. We'd essentially hang our users out to dry. -- Andrei (1) is made of turbo lol. A huge number of bindings to C++ libraries rely on destructors. The breakage in the proposal is already massive. For instance, GtkD won't work. And other examples have been presented.
Re: More radical ideas about gc and reference counting
On Friday, 2 May 2014 at 00:45:42 UTC, Andrei Alexandrescu wrote: Here's where the point derails. A struct may be preexisting; the decision to define a destructor for it and the decision to use polymorphism for an object that needs that structure are most of the time distinct. Andrei Sheep eat grass. Boats float on water. These are completely distinct. Therefore, there won't be any issue if we ditch all the grass into the sea from that boat that carries sheep around.
Re: Isolated by example
On Friday, 2 May 2014 at 06:51:49 UTC, deadalnix wrote: [full proposal snipped] I've been thinking about something similar as an 'obvious' idea. It seems to me rather a good one.
Re: DIP61: redone to do extern(C++,N) syntax
On Friday, 2 May 2014 at 00:22:14 UTC, deadalnix wrote: 2. Creating a new name lookup mechanism is the kind of idea that sounds good but ends up horribly backfiring. There are all kinds of implications, and it affects every single identifier resolution. You don't want to mess with that (especially since it is already quite badly defined in the first place). What implications? The implication of this DIP is that all library authors will have to follow a convention of having all C++ dependencies in a module named cpp in order to have a fake way of specifying fully qualified C++ names, and then lobby for coercing C++ types that have different paths. This is not elegant. It is a hack.
Re: DIP61: redone to do extern(C++,N) syntax
On 5/2/2014 12:34 AM, Ola Fosheim Grøstad ola.fosheim.grostad+dl...@gmail.com wrote: The implications with this DIP is that all library authors will have to follow a convention of having all C++ dependencies in a module named cpp in order to have a fake way of specifying fully qualified C++ names. Not at all, any more than you have to do that for C names. This is not elegant. It is a hack. C++ is not elegant, and interfacing to it will necessarily pick up some of that.
Re: More radical ideas about gc and reference counting
On Thursday, 1 May 2014 at 21:29:19 UTC, Andrei Alexandrescu wrote: On 5/1/14, 1:19 PM, H. S. Teoh via Digitalmars-d wrote: On Thu, May 01, 2014 at 01:03:06PM -0700, Andrei Alexandrescu via Digitalmars-d wrote: On 5/1/14, 12:52 PM, Nordlöw wrote: into a class. I'm inclined to say that we should outright prohibit that, That can't happen. Why is that? (1) Too much breakage, (2) would disallow a ton of correct code, (3) no reasonable alternative to propose. We'd essentially hang our users out to dry. -- Andrei Isn't this what we're already doing by (eventually) getting rid of class dtors? Not even close. (1) A lot less breakage, (2) disallowed code was already not guaranteed to work, (3) reasonable alternatives exist. Andrei I have 165k lines of code to review for that change... I would not call it a minor breakage... /Paolo
Re: Isolated by example
On 5/1/2014 11:51 PM, deadalnix wrote: First the original post I made on this forum : http://forum.dlang.org/thread/kluaojijixhwigouj...@forum.dlang.org#post-kluaojijixhwigoujeip:40forum.dlang.org It's nearly the same as http://wiki.dlang.org/DIP29 except that DIP29 tries to use a library type to take the role of 'isolated' in your proposal.
Re: Isolated by example
On Friday, 2 May 2014 at 09:05:13 UTC, Walter Bright wrote: On 5/1/2014 11:51 PM, deadalnix wrote: First the original post I made on this forum: http://forum.dlang.org/thread/kluaojijixhwigouj...@forum.dlang.org#post-kluaojijixhwigoujeip:40forum.dlang.org It's nearly the same as http://wiki.dlang.org/DIP29 except that DIP29 tries to use a library type to take the role of 'isolated' in your proposal. DIP29 is not safe, does not help to construct immutables, is not inferred, does not provide aliasing info for the optimizer, etc...
Re: DIP61: redone to do extern(C++,N) syntax
On Friday, 2 May 2014 at 07:44:50 UTC, Walter Bright wrote: Not at all, any more than you have to do that for C names. The difference is that C names tend to have their namespace embedded: framework_structname_function()
Re: Isolated by example
On 5/2/2014 2:24 AM, deadalnix wrote: DIP29 is not safe, How so? does not help to construct immutables, immutable p = new int; works. is not inferred, I'm surprised you'd say that, most of it is about inferring uniqueness. do not provide aliasing infos for the optimizer, That's right. etc... ?
Re: Isolated by example
I like it a lot, as it solves several problems elegantly! Some comments inline... On Friday, 2 May 2014 at 06:51:49 UTC, deadalnix wrote: An isolated is consumed when: - it is returned - it is passed as an argument - it is assigned to another island Assignment and passing to a scope variable can be exempt from this rule. When an isolated goes out of scope without being consumed, the compiler can free the whole island: void foo() { A a = new A(); // a is isolated if A's ctor allows. I guess the condition is that assignment to isolated is allowed only from a unique expression. Thanks to Walter's recent work, this is now inferred in many cases. But I guess in cases where it cannot be inferred (.di files come to mind), it needs to be annotated explicitly:

class A {
    this() isolated;
}

// a is not consumed. The compiler can insert memory freeing. } As we can see, it allows the compiler to do some freeing for us to reduce GC pressure. Manu why aren't you already here cheering ? To make this more useful, turn it into a requirement. That gets us deterministic destruction for reference types. Example:

isolated tmp = new Tempfile();
// use tmp ...
// tmp is guaranteed to get cleaned up

When an isolated is consumed, the island it is in is merged into the island that consumes it. All references to the island become write-only until the next write. [merge example snipped] This needs more elaboration. The problem is control flow:

isolated a = new A();
if (...) {
    immutable b = a;
    ...
}
a.foo(); // -- ???

(Similar for loops and gotos.) There are several possibilities: 1) isolateds must be consumed either in every branch or in no branch, and this is statically enforced by the compiler. 2) It's just forbidden, but the compiler doesn't guarantee it except where it can. 3) The compiler inserts a hidden variable to track the status of the isolated, and asserts if it is used while in an invalid state. This check can be elided when it can be proven unnecessary. I would prefer 3), as it is the most flexible. I also believe a similar runtime check is done in other situations (to guard against returning locals in @safe code, IIRC). I hope the idea gets across better with some sample code and will be considered. As the sample code shows, isolated does not need to be specified explicitly very often. Users who don't annotate can get a lot of benefit out of the concept right away. That's true, but it is also a breaking change, because then some variables suddenly aren't writable anymore (or alternatively, the compiler would have to analyse all future uses of a variable first to see whether it can be inferred isolated, if that's even possible in the general case). I believe it's fine if explicit annotation is required.
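Option 3 could be lowered by the compiler roughly as follows. This is pseudocode: 'isolated' is only a proposal, and the hidden flag name is invented for illustration:

```d
// Pseudocode for option 3: the compiler pairs an isolated whose fate
// depends on control flow with a hidden validity flag, and asserts
// before every later use. Both 'isolated' and the flag are hypothetical.
auto a = new A();            // isolated
bool __a_consumed = false;   // hidden, compiler-generated

if (condition)
{
    immutable b = a;         // consumes a's island
    __a_consumed = true;
}

assert(!__a_consumed, "use of consumed isolated");
a.foo();                     // only reached while a is still valid
```

When the compiler can prove the flag's value statically (e.g. both branches consume, or neither does), the flag and the assert can be elided entirely, so straight-line code pays nothing.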
Re: More radical ideas about gc and reference counting
On Thursday, 1 May 2014 at 22:23:46 UTC, H. S. Teoh via Digitalmars-d wrote: On Thu, May 01, 2014 at 03:10:04PM -0700, Walter Bright via Digitalmars-d wrote: The thing is, GC is a terrible and unreliable method of managing non-memory resource lifetimes. Destructors for GC objects are not guaranteed to ever run. If you do have a struct with a destructor as a field in a class, you've got, at minimum, suspicious code and a latent bug. Exactly!!! This is why I said we should ban the use of structs with dtors as fields in classes. No, not just in a class, but in any GC-managed object. It's unfortunate that class currently implies GC.
Re: Parallel execution of unittests
On Friday, 2 May 2014 at 06:57:46 UTC, w0rp wrote: You know, executing batches of tests in multiple processes could be a good compromise. You might still run into filesystem issues, but if you run a series of tests with a number of processes at the same time, you can at least guarantee that you won't run into shared memory issues. Using fork() would be good for multi-threaded unit testing or when testing global data structures (singletons). I don't get the desire to demand that unit tests be pure. That would miss the units that are most likely to blow up in an application. If you fork before opening any files, it will probably work out OK.
Re: Parallel execution of unittests
On Thursday, 1 May 2014 at 18:38:15 UTC, w0rp wrote: On Thursday, 1 May 2014 at 17:04:53 UTC, Xavier Bigand wrote: Le 01/05/2014 16:01, Atila Neves a écrit : On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote: On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote: On 2014-04-30 23:35, Andrei Alexandrescu wrote: Agreed. I think we should look into parallelizing all unittests. -- Andrei I recommend running the tests in random order as well. This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time. They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that. Atila +1 Tests shouldn't be run in a random order all of the time, perhaps once in a while, manually. Having continuous integration randomly report build failures is crap. Either you should always see a build failure, or you shouldn't see it. You can only test things which are deterministic, at least as far as what you observe. Running tests in a random order should be something you do manually, only when you have some ability to figure out why the tests just failed. In my experience when a test fails randomly because of ordering, a while loop on the shell running until failure is enough to reproduce it in a few seconds. But as others have mentioned, being able to use a seed to reproduce it exactly is superior. Atila
Re: DIP(?) Warning to facilitate porting to other archs
On Thursday, 1 May 2014 at 11:17:10 UTC, Temtaime wrote: Hi everyone. I think we need a -w64 flag (or another name; suggestions?) that warns if code may not compile on other archs. Example: size_t a; uint b = a; // ok on 32 without a warning, but fails on 64 with an error And on 32 with -w64 it'll be: Warning: size_t.sizeof may be greater than 32 bits What do you think? Should I create a proposal, or does nobody care about porting and it's useless? Any ideas are welcome. +1. Lost count of how many times my Linux code wouldn't compile on Windows because of this. Atila
Re: Parallel execution of unittests
Walter Bright: You've already got it working with version, that's what version is for. Why add yet another way to do it? Because I'd like something better. It's an idiom that I have used many times (around 15-20 times). I'd like the compiler (or build tool) to spare me from specifying twice what the main module is. Also, with the current way of doing it, in those modules I have to specify the module name twice (once at the top and once at the bottom, unless I use some compile-time synthesis of the version identifier from the current module name). Bye, bearophile
Re: DIP(?) Warning to facilitate porting to other archs
On Thursday, 1 May 2014 at 11:17:10 UTC, Temtaime wrote: Hi everyone. I think we need a -w64 flag (or another name; suggestions?) that warns if code may not compile on other archs. Example: size_t a; uint b = a; // ok on 32 without a warning, but fails on 64 with an error And on 32 with -w64 it'll be: Warning: size_t.sizeof may be greater than 32 bits What do you think? Should I create a proposal, or does nobody care about porting and it's useless? Any ideas are welcome. Why not? I had some minor difficulties porting from 32 to 64 bit. I think it's a good idea to let the user know beforehand that there might be problems when porting. If you only find out when you are already on a 64- or 32-bit machine, it's a bit annoying to go back and change the code.
Re: DIP(?) Warning to facilitate porting to other archs
Temtaime: I think we need a -w64 flag (or another name; suggestions?) that warns if code may not compile on other archs. Some of the things it has to guard against:

void main() {
    size_t x;
    ptrdiff_t y;
    uint r1 = x; // warn
    int r2 = x;  // warn
    uint r3 = y; // warn
    int r4 = y;  // warn

    char[] data;
    foreach (uint i, c; data) {} // warn
    foreach (int i, c; data) {}  // warn
}

Is something missing? Bye, bearophile
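Until such a flag exists, lossy narrowings can at least be made loud in today's D — a sketch of current workarounds, not of the proposed -w64 behaviour:

```d
// Workarounds available today, without -w64: std.conv.to checks the
// narrowing at runtime, and an explicit cast at least documents intent.
import std.conv : to;

void main()
{
    size_t x = 42;
    // uint b = x;          // compile error on 64-bit, silent on 32-bit
    uint b1 = x.to!uint;    // throws ConvOverflowException if x > uint.max
    uint b2 = cast(uint) x; // compiles everywhere, but truncates silently
    assert(b1 == 42 && b2 == 42);
}
```

Neither catches the problem at compile time on the "wrong" architecture, which is exactly the gap the proposed flag would fill; but to!uint at least turns silent truncation into an exception.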
HILT 2014
Anyone interested in writing a little paper about the D language used for medium-integrity software systems? :-) http://lambda-the-ultimate.org/node/4943 http://sigada.org/conf/hilt2014/ While D isn't Ada, I think it's plenty better than using C plus the handcuffs (http://en.wikipedia.org/wiki/MISRA_C ). Bye, bearophile
Re: Isolated by example
On Fri, 02 May 2014 02:51:47 -0400, deadalnix deadal...@gmail.com wrote: [...] When an isolated is consumed, the island it is in is merged into the island that consumes it. All references to the island become write-only until the next write.

void foo() {
    A a = new A();   // a is isolated
    immutable b = a; // a's island is merged into immutable
    // a's island has been consumed. a is not readable at this point.

When you say consumed, you mean it statically cannot be used, not that it was set to null or something, right?

    // a.foo()       // Error
    a = new A();     // OK
    a.foo();         // OK, we have written into a.
}

OK, but what happens if I do this?

auto c = a;
a.foo();

At this point, a was inferred to be isolated, but in this section it doesn't have to be. Will the type change in different parts of the function? Or will this simply be statically disallowed? [...] Here, the RC struct must be constructed with an isolated. To do so, the isolated has to be passed as an argument: it is consumed. As a result, the RC struct can be sure that it holds the only usable reference to stuff. mystuff should be marked isolated too, no? [...] Now that is pretty sweet. First we don't have to do the crazy and unsafe dance of casting to shared and back, and this is actually safe. Go's type system is not safe across channels, so that puts us ahead in one of the things Go does best. Yes, I really like this idea. This is what is missing from the type system. I read in your previous post that shared can benefit from using isolated, by not having to lock all sub-objects. But I'm confused as to how that would work. If a variable is shared, by default it can be passed around. But wouldn't passing the shared variable mean you have to disallow access to the isolated member? The compiler would have to be aware of the locking protection and enforce it. Essentially, you could access a locked isolated variable, but not a shared isolated variable. These kinds of requirements need to be specified somehow. Also, when you call a method, how does the island get handled?

abstract class A {
    void foo(ref A other) { other = this; }
    void bar(int n);
}

class B : A {
    private isolated int x;
    void bar(int n) { x = n; } // I'm assuming this is ok, right?
}

auto b = new B;
A a;
b.foo(a);

At this point, both b and a refer to the same object. How does the compiler know what to prevent here? At what point does the island become inaccessible? Another issue is with delegates. They have no reliable type for the context pointer. -Steve
Re: DIP61: redone to do extern(C++,N) syntax
On Fri, 02 May 2014 01:22:12 +0100, deadalnix deadal...@gmail.com wrote: On Thursday, 1 May 2014 at 10:03:21 UTC, Regan Heath wrote: On Wed, 30 Apr 2014 20:56:15 +0100, Timon Gehr timon.g...@gmx.ch wrote: If this is a problem, I guess the most obvious alternatives are to: 1. Get rid of namespace scopes. Require workarounds in the case of conflicting definitions in different namespaces in the same file. (Eg. use a mixin template.) I'd presume this would not happen often. 2. Give the global C++ namespace a distinctive name and put all other C++ namespaces below it. This way fully qualified name lookup will be reliable. 3. Use the C++ namespace for mangling, but not lookup. C++ symbols will belong in the module they are imported into, and be treated exactly the same as D symbols. 1. The whole point of C++ namespaces is to avoid that. That is going to happen. Probably less in D, as we have module scoping. But that makes it impossible to port many C++ headers. 2. Creating a new name lookup mechanism is the kind of idea that sounds good but ends up horribly backfiring. There are all kinds of implications, and it affects every single identifier resolution. You don't want to mess with that (especially since it is already quite badly defined in the first place). 3. That makes it impossible to port some C++ headers, just as 1 does. #1 and #3 are essentially the same thing, and are how C# interfaces with... well, C, not C++, granted. But how does this make it impossible to port some C++ headers? Were you thinking...

[a.cpp/h]
namespace a { void foo(); }

[b.cpp/h]
namespace b { void foo(); }

[header.h] - header to import
#include "a.h"
#include "b.h"

[my.d] - our port
extern(c++, a) foo();
extern(c++, b) foo(); // oh, oh!

? Because the solution is...

[a.d]
extern(c++, a) foo();

[b.d]
extern(c++, b) foo();

[my.d]
import a;
import b;
// resolve the conflict using the existing D mechanisms,
// or call them using a.foo, b.foo.

In essence we're re-defining the C++ namespace(s) as D one(s), and we have complete flexibility about how we do it. We can expose C++ symbols in any D namespace we like; we can hide/pack others away in a cpp or util namespace if we prefer. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Re: DIP61: redone to do extern(C++,N) syntax
On Thu, 01 May 2014 21:44:10 -0400, Walter Bright newshou...@digitalmars.com wrote: On 5/1/2014 5:33 PM, deadalnix wrote: On Thursday, 1 May 2014 at 18:44:36 UTC, Walter Bright wrote: On 4/27/2014 12:54 PM, Walter Bright wrote: http://wiki.dlang.org/DIP61 Now with pull request: https://github.com/D-Programming-Language/dmd/pull/3517 Does that create a new named scope ? Yes. And regular D identifier resolution rule ? Yes. IF yes, that's awesome news ! I am rather pleased with how it turned out :-) Can you explain to people who don't understand DMD code, does this exactly implement the DIP? The two questions above imply that the DIP isn't enough to answer those questions... -Steve
Re: Isolated by example
On Fri, 02 May 2014 09:50:07 -0400, Meta jared...@gmail.com wrote: On Friday, 2 May 2014 at 06:51:49 UTC, deadalnix wrote: An isolated is consumed when: - it is returned - it is passed as an argument - it is assigned to another island This will not work well with UFCS.

isolated int[] ints = new int[](10);
// put looks like a member function, but this desugars
// to put(ints, 3), so ints is consumed
ints.put(3);

Some interaction with pure would be in order. I don't think ints.put(3) should consume ints. However, you have just lost the data you put, since ints cannot have any other reference to it! -Steve
Re: Isolated by example
On Friday, 2 May 2014 at 06:51:49 UTC, deadalnix wrote: An isolated is consumed when: - it is returned - it is passed as an argument - it is assigned to another island This will not work well with UFCS.

isolated int[] ints = new int[](10);
// put looks like a member function, but this desugars
// to put(ints, 3), so ints is consumed
ints.put(3);
Re: D equivalent of the X macro?
On 02/05/14 02:22, H. S. Teoh via Digitalmars-d wrote: I was reading this article of Walter's: http://www.drdobbs.com/cpp/the-x-macro/228700289 Which is a neat trick that I wish I'd known back when I was writing C/C++. But the thought crossed my mind: what's the D equivalent of the X macro, since D doesn't have macros? Any ideas? The Color example is already handled by D, if I recall correctly. -- /Jacob Carlborg
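The usual D take on the Color example from the article is to keep the list in one place and generate everything else from it with CTFE and mixins — a sketch of the general technique, not necessarily what Jacob had in mind:

```d
// One D analogue of the X macro: a single compile-time list drives both
// the enum definition and the name table, so they can never get out of sync.
enum colorNames = ["red", "green", "blue"];

string makeEnum(string name, string[] members)
{
    string code = "enum " ~ name ~ " { ";
    foreach (m; members)
        code ~= m ~ ", ";
    return code ~ "}";
}

mixin(makeEnum("Color", colorNames)); // enum Color { red, green, blue, }

// The same list also provides the reverse mapping:
string colorName(Color c) { return colorNames[c]; }

unittest
{
    static assert(Color.green == 1);
    assert(colorName(Color.blue) == "blue");
}
```

Because makeEnum runs at compile time, this is ordinary type-safe D once mixed in, with none of the preprocessor's textual fragility.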
Re: D For A Web Developer
On 01/05/14 21:55, Marc Schütz schue...@gmx.net wrote: You're probably right. I thought that changed in a recent release, but can't find it anymore. I don't know. I wouldn't trust it. It's the behavior in Rails 3. I haven't used Rails 4 yet. -- /Jacob Carlborg
Re: HILT 2014
On Friday, 2 May 2014 at 13:26:00 UTC, bearophile wrote: Any one interested in writing a little paper about D language used for medium-integrity software systems? :-) http://lambda-the-ultimate.org/node/4943 http://sigada.org/conf/hilt2014/ While D isn't Ada, I think it's plenty better than using C plus the handcuffs (http://en.wikipedia.org/wiki/MISRA_C ). Bye, bearophile Reminds me of Mr. Hilter in one of Monty Python's sketches. (https://en.wikipedia.org/wiki/Hilter_(character)#12._The_Naked_Ant)
Re: Parallel execution of unittests
On 5/1/14, 8:04 PM, Jason Spencer wrote: On Thursday, 1 May 2014 at 21:40:38 UTC, Andrei Alexandrescu wrote: I'll be blunt. What you say is technically sound (which is probably why you believe it is notable)... Well, I suppose that's not the MOST insulting brush-off I could hope for, but it falls short of encouraging me to contribute ideas for the improvement of the language. Sorry, and that's great. Thanks! -- Andrei
Re: Parallel execution of unittests
On 5/1/14, 8:04 PM, Jason Spencer wrote: On Thursday, 1 May 2014 at 21:40:38 UTC, Andrei Alexandrescu wrote: I'll be blunt. What you say is technically sound (which is probably why you believe it is notable)... Well, I suppose that's not the MOST insulting brush-off I could hope for, but it falls short of encouraging me to contribute ideas for the improvement of the language. I need to make an amendment to this because indeed it's more than 2 std deviations away from niceness: I have a long history of ideas with a poor complexity/usefulness ratio, and I now wish I'd received such a jolt. -- Andrei
Re: More radical ideas about gc and reference counting
On 5/2/14, 1:34 AM, Paolo Invernizzi wrote: On Thursday, 1 May 2014 at 21:29:19 UTC, Andrei Alexandrescu wrote: On 5/1/14, 1:19 PM, H. S. Teoh via Digitalmars-d wrote: On Thu, May 01, 2014 at 01:03:06PM -0700, Andrei Alexandrescu via Digitalmars-d wrote: On 5/1/14, 12:52 PM, Nordlöw wrote: into a class. I'm inclined to say that we should outright prohibit that, That can't happen. Why is that? (1) Too much breakage, (2) would disallow a ton of correct code, (3) no reasonable alternative to propose. We'd essentially hang our users out to dry. -- Andrei Isn't this what we're already doing by (eventually) getting rid of class dtors? Not even close. (1) A lot less breakage, (2) disallowed code was already not guaranteed to work, (3) reasonable alternatives exist. Andrei I have 165k lines of code to review for that change... I would not call it a minor breakage... I didn't. I said a lot less than straight out disallowing struct members. -- Andrei
Re: More radical ideas about gc and reference counting
On 5/2/14, 12:07 AM, deadalnix wrote: The breakage in the proposal is already massive. For instance, GtkD won't work. And other example have been presented. Yah, prolly we can't go that far. -- Andrei
Re: Isolated by example
On 5/2/14, 2:24 AM, deadalnix wrote: On Friday, 2 May 2014 at 09:05:13 UTC, Walter Bright wrote: On 5/1/2014 11:51 PM, deadalnix wrote: First the original post I made on this forum : http://forum.dlang.org/thread/kluaojijixhwigouj...@forum.dlang.org#post-kluaojijixhwigoujeip:40forum.dlang.org It's nearly the same as http://wiki.dlang.org/DIP29 except that DIP29 tries to use a library type to take the role of 'isolated' in your proposal. DIP29 is not safe, does not help to construct immutables, is not inferred, does not provide aliasing info for the optimizer, etc... I think a more detailed comparison would do well here. My understanding of DIP29 is quite at odds with these claims. Andrei
Re: More radical ideas about gc and reference counting
On 5/2/14, 3:09 AM, Marc Schütz schue...@gmx.net wrote: On Thursday, 1 May 2014 at 22:23:46 UTC, H. S. Teoh via Digitalmars-d wrote: On Thu, May 01, 2014 at 03:10:04PM -0700, Walter Bright via Digitalmars-d wrote: The thing is, GC is a terrible and unreliable method of managing non-memory resource lifetimes. Destructors for GC objects are not guaranteed to ever run. If you do have a struct with a destructor as a field in a class, you've got, at minimum, suspicious code and a latent bug. Exactly!!! This is why I said we should ban the use of structs with dtors as a field in a class. No, not in a class, but in any GC-managed object. It's unfortunate that class currently implies GC. So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei
Re: More radical ideas about gc and reference counting
On 5/2/14, Andrei Alexandrescu via Digitalmars-d digitalmars-d@puremagic.com wrote: So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei I suggest tracking these ideas on a wiki page so we don't lose track of what the latest proposal is (we'll run in circles otherwise). These threads tend to literally explode with posts from everyone. :)
Re: More radical ideas about gc and reference counting
On Friday, 2 May 2014 at 15:06:59 UTC, Andrei Alexandrescu wrote: On 5/2/14, 3:09 AM, Marc Schütz schue...@gmx.net wrote: On Thursday, 1 May 2014 at 22:23:46 UTC, H. S. Teoh via Digitalmars-d wrote: On Thu, May 01, 2014 at 03:10:04PM -0700, Walter Bright via Digitalmars-d wrote: The thing is, GC is a terrible and unreliable method of managing non-memory resource lifetimes. Destructors for GC objects are not guaranteed to ever run. If you do have struct with a destructor as a field in a class, you've got, at minimum, suspicious code and a latent bug. Exactly!!! This is why I said we should ban the use of structs with dtors as a field in a class. No, not in a class, but in any GC-managed object. It's unfortunate that class currently implies GC. So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Well, that would be the logical consequence... But my main point was actually this: Don't disallow destructors on classes just because they are classes, but disallow them when they are not guaranteed to be called (or calling them is troublesome because of multi-threading), i.e. GC.
std.allocator: false pointers
Hello, I'm currently doing a nice optimization in the tracing code: I use the same bit (per block) to indicate this memory is allocated and this memory is marked as used during tracing. The way the trick works is, at the beginning of a collection the GC marks all of its memory as deallocated (sic!). Then, it traces through pointers and marks BACK as allocated the memory that's actually used. At the end of tracing, there's no need to do anything - what's used stays used, the rest is free by default. This is unlike more traditional GCs, which use a SEPARATE bit to mean mark during tracing. At the beginning of the collection, these mark bits are set to 0. Then collection proceeds and marks with 1 all blocks that are actually used. As the last step, collection deallocates all blocks that were marked with 0 and were previously allocated. So the optimization consumes less memory and saves one pass. It does have a disadvantage, however. Consider a false pointer. It will claim that a block that's free is actually occupied, and the implementation can't distinguish because it conflates the mark and the allocated bit together. So it's possible that at the end of collection there are blocks allocated that previously weren't. The optimization is therefore sensitive to false pointers. Thoughts? How bad are false pointers (assuming we fix globals, which only leaves the stack and registers)? Andrei
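A toy model of the scheme described above, with invented names and a `reachable` list standing in for the tracing step; the last assertion shows how a false pointer resurrects a dead block:

```d
// One bit per block means both "allocated" and "marked during tracing".
bool[] allocated;

void collect(size_t[] reachable)
{
    allocated[] = false;       // step 1: mark ALL memory as deallocated
    foreach (i; reachable)     // step 2: trace, marking used blocks back
        allocated[i] = true;
    // step 3: nothing to do -- unmarked blocks are already free
}

void main()
{
    allocated = [true, true, true, true];
    collect([0, 2]);           // blocks 1 and 3 are unreachable
    assert(allocated == [true, false, true, false]);

    // A false pointer into block 3 resurrects it, allocated or not:
    allocated = [true, true, true, false];
    collect([0, 2, 3]);
    assert(allocated == [true, false, true, true]); // 3 came alive
}
```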
Re: More radical ideas about gc and reference counting
On Wednesday, 30 April 2014 at 20:21:33 UTC, Andrei Alexandrescu wrote: I think there's no need to argue that in this community. The GC never guarantees calling destructors even today, so this decision would be just a point in the definition space (albeit an extreme one). I think I (we) need a bit of clarification. The docs at http://dlang.org/class.html#destructors state: The garbage collector calls the destructor function when the object is deleted. As far as I understand, this means that destructors are always called when an instance is collected. Is this right? Doesn't this mean that destructors are guaranteed to run for unreferenced objects if we force the GC to do a full collect cycle?
Re: DIP(?) Warning to facilitate porting to other archs
On Thu, 01 May 2014 11:17:09 + Temtaime via Digitalmars-d digitalmars-d@puremagic.com wrote: Hi everyone. I think we need a -w64 (or some other name; suggestions?) flag that warns if code may not compile on other archs. Example: size_t a; uint b = a; // ok on 32 without a warning but fail on 64 with error And on 32 with -w64 it'll be: Warning: size_t.sizeof may be greater than 32 bits. What do you think? Should I create a proposal, or does nobody care about porting, making it useless? Any ideas are welcome. The compiler doesn't even know that size_t exists. It's just an alias in object_.d. So, it could be fairly involved to get the compiler to warn about something like this. And while in some respects, this would be nice to have, I don't think that it's actually a good idea. IMHO, the compiler pretty much has no business warning about anything. As far as the compiler is concerned, everything should be either an error or nothing (and Walter agrees with me on this; IIRC, the only reason that he added warnings in the first place was as an attempt to appease some folks). About the only exception would be deprecation-related warnings, because those are items that aren't currently errors but are going to be errors. If warnings are in the compiler, programmers are forced to fix them as if they were errors (because it's bad practice to leave compiler warnings in your build), and they can actually affect what does and doesn't compile thanks to the -w flag (which can be particularly nasty when stuff like template constraints get involved). Warnings belong in lint-like tools where the user can control what they want to be warned about, including things that would be useful to them but most other folks wouldn't care about.
So, unless you're suggesting that we make it an error to assign a value of size_t to a uint, I don't think that it makes any sense for the compiler to say anything about this, and given the fact that it doesn't know anything about size_t anyway, it's probably not particularly reasonable to have the compiler warn about it even if we agreed that it would be a good idea. D is ideally suited to writing lint-like tools, and as I understand it, Brian Schott has written one. I don't know what state it's currently in or what exactly it can warn you about at this point, but I think that it would be better to look at putting warnings like this in such a tool than to try and get it put in the compiler. - Jonathan M Davis
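For reference, the narrowing Temtaime's example trips over can already be written portably today, either as an explicit truncation or as a checked conversion:

```d
import std.conv : to;

void main()
{
    size_t a = 42;
    // uint b = a;          // error on 64-bit targets: size_t is 64 bits there
    uint b = cast(uint) a;  // explicit truncation, compiles on both widths
    uint c = a.to!uint;     // checked: throws ConvOverflowException on overflow
    assert(b == 42 && c == 42);
}
```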
Re: More radical ideas about gc and reference counting
On 5/2/14, 9:04 AM, fra wrote: On Wednesday, 30 April 2014 at 20:21:33 UTC, Andrei Alexandrescu wrote: I think there's no need to argue that in this community. The GC never guarantees calling destructors even today, so this decision would be just a point in the definition space (albeit an extreme one). I think I (we) need a bit of clarification. Docs in http://dlang.org/class.html#destructors states that: The garbage collector calls the destructor function when the object is deleted. As far as I understand, this means that destructors are always called when an instance is collected. Is this right? Doesn't this mean that destructors are guaranteed to run for unreferenced objects if we force the GC to do a full collect cycle? False pointers make it seem like unreferenced objects are in fact referenced, so fewer destructors will run than there should. -- Andrei
Re: More radical ideas about gc and reference counting
On 2014-05-02 17:38, Marc Schütz schue...@gmx.net wrote: But my main point was actually this: Don't disallow destructors on classes just because they are classes, but disallow them when they are not guaranteed to be called (or calling them is troublesome because of multi-threading), i.e. GC. Tango for D1 added a dispose method to Object. This method was called when delete or scope was used. -- /Jacob Carlborg
Re: Parallel execution of unittests
On Friday, 2 May 2014 at 14:59:50 UTC, Andrei Alexandrescu wrote: I need to make an amendment to this because indeed it's more than 2 std deviations away from niceness: I have a long history of ideas with a poor complexity/usefulness ratio, and I now wish I'd received such a jolt. -- Andrei I appreciate that, and can accept it in the spirit of mentoring and helpfulness. What might work even better for me, though, is to forego the assumption that I need such a jolt or that you are the person, in this forum at least, to provide it and simply address the merits or lack thereof of the suggestion as made. If we can't agree that a method, direct or indirect, to control the order of UTs is appropriate, then we should opt for the status quo. By my reading of this thread, that leaves us with no consensus that UTs MUST be order-independent, but that being able to parallelize is a good thing. It seems we can: 1. leave defaults as they are and make parallelization an option, or 2. make it the language model and allow people to dissent with an option I can agree with Andrei that you'd rather have a solid, well-defined language that works in most cases without too many buttons, switches, and levers. I'm just not sure that jibes with "easiest is safest" and "don't impose a model, provide a tool". To me, improving the performance of a non-performance-critical aspect does not weigh enough to counterbalance a safety risk and model imposition. How about others? Test names seem pretty much agreed to. I think the idea of making everything available to the druntime would let pretty much anyone do what they need.
Re: std.allocator: false pointers
Well, in a 64-bit address space, the false pointer issue is almost moot; the issue comes in when you try to apply this design to 32-bit, where the false pointer issue is more prevalent. Is the volume of memory saved by this really worth it? Another thing to consider is that this makes it impossible to pre-allocate blocks of varying sizes for absurdly fast allocations via atomic linked lists, in most cases literally a single `lock cmpxchg`. On 5/2/14, Andrei Alexandrescu via Digitalmars-d digitalmars-d@puremagic.com wrote: Hello, I'm currently doing a nice optimization in the tracing code: I use the same bit (per block) to indicate this memory is allocated and this memory is marked as used during tracing. The way the trick works is, at the beginning of a collection the GC marks all of its memory as deallocated (sic!). Then, it traces through pointers and marks BACK as allocated the memory that's actually used. At the end of tracing, there's no need to do anything - what's used stays used, the rest is free by default. This is unlike more traditional GCs, which use a SEPARATE bit to mean mark during tracing. At the beginning of the collection, these mark bits are set to 0. Then collection proceeds and marks with 1 all blocks that are actually used. As the last step, collection deallocates all blocks that were marked with 0 and were previously allocated. So the optimization consumes less memory and saves one pass. It does have a disadvantage, however. Consider a false pointer. It will claim that a block that's free is actually occupied, and the implementation can't distinguish because it conflates the mark and the allocated bit together. So it's possible that at the end of collection there are blocks allocated that previously weren't. The optimization is therefore sensitive to false pointers. Thoughts? How bad are false pointers (assuming we fix globals, which only leaves the stack and registers)? Andrei
Re: std.allocator: false pointers
On Friday, 2 May 2014 at 15:55:06 UTC, Andrei Alexandrescu wrote: Hello, I'm currently doing a nice optimization in the tracing code: I use the same bit (per block) to indicate this memory is allocated and this memory is marked as used during tracing. The way the trick works is, at the beginning of a collection the GC marks all of its memory as deallocated (sic!). Then, it traces through pointers and marks BACK as allocated the memory that's actually used. At the end of tracing, there's no need to do anything - what's used stays used, the rest is free by default. This is unlike more traditional GCs, which use a SEPARATE bit to mean mark during tracing. At the beginning of the collection, these mark bits are set to 0. Then collection proceeds and marks with 1 all blocks that are actually used. As the last step, collection deallocates all blocks that were marked with 0 and were previously allocated. So the optimization consumes less memory and saves one pass. It does have a disadvantage, however. Consider a false pointer. It will claim that a block that's free is actually occupied, and the implementation can't distinguish because it conflates the mark and the allocated bit together. So it's possible that at the end of collection there are blocks allocated that previously weren't. The optimization is therefore sensitive to false pointers. Thoughts? How bad are false pointers (assuming we fix globals, which only leaves the stack and registers)? If destructors are still going to be called (see other thread), this trick is dangerous, because the resurrected objects might later on be destroyed again (double free). I'm aware that this is still about untyped allocators, but if they are going to be used as a basis for typed allocators, things like this need to be considered already at this stage.
Re: std.allocator: false pointers
On Fri, 02 May 2014 11:55:11 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Hello, I'm currently doing a nice optimization in the tracing code: I use the same bit (per block) to indicate this memory is allocated and this memory is marked as used during tracing. The way the trick works is, at the beginning of a collection the GC marks all of its memory as deallocated (sic!). Then, it traces through pointers and marks BACK as allocated the memory that's actually used. At the end of tracing, there's no need to do anything - what's used stays used, the rest is free by default. This is unlike more traditional GCs, which use a SEPARATE bit to mean mark during tracing. At the beginning of the collection, these mark bits are set to 0. Then collection proceeds and marks with 1 all blocks that are actually used. As the last step, collection deallocates all blocks that were marked with 0 and were previously allocated. So the optimization consumes less memory and saves one pass. It does have a disadvantage, however. Consider a false pointer. It will claim that a block that's free is actually occupied, and the implementation can't distinguish because it conflates the mark and the allocated bit together. So it's possible that at the end of collection there are blocks allocated that previously weren't. The optimization is therefore sensitive to false pointers. Thoughts? How bad are false pointers (assuming we fix globals, which only leaves the stack and registers)? False pointers are less of a problem in 64-bit code, but you can run into worse issues. If you are not zeroing the memory when deallocating, then if it mysteriously comes alive again, it has the ghost of what could be a pointer to other code. Your blocks are more likely to resurrect once one of them resurrects. Why not keep the 3 states, but just treat unmarked blocks as free? Then the next time you go through tracing, change the bit to free if it was already marked. 
This doesn't save on the bit space, but I think the savings there is minimal anyway. However, it does allow the final pass to be saved. -Steve
Re: D For A Web Developer
On Wednesday, 30 April 2014 at 07:14:34 UTC, Jacob Carlborg wrote: I think one of the great things about Rails and Ruby is all the libraries and plugins that are available. If I want to do something, in RoR there's a big chance there's already a library for that. In D, there's a big chance I need to implement it myself. This has been the fundamental issue for me. It's not just missing libs, it's libs that are surfaced via a C binding, which in my limited experience have been difficult to use and make portability hard. I think D is a superior language to Go, but Go has a very complete SDK and it's all written in Go, so I don't have to worry about chasing down native libs to install. brad
Re: DIP61: redone to do extern(C++,N) syntax
On Friday, 2 May 2014 at 09:25:34 UTC, Ola Fosheim Grøstad wrote: On Friday, 2 May 2014 at 07:44:50 UTC, Walter Bright wrote: Not at all, any more than you have to do that for C names. The difference is that C names tend to have their namespace embedded: framework_structname_function() You are only proving that you are missing the point completely.
Re: std.allocator: false pointers
On Fri, 02 May 2014 13:26:41 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: Why not keep the 3 states, but just treat unmarked blocks as free? Then the next time you go through tracing, change the bit to free if it was already marked. Sorry, if it was already *unmarked* (or marked as garbage). Essentially:

enum GCState { free, allocated, garbage }
GCState[] memoryBlocks;

void fullCollect()
{
    foreach (ref st; memoryBlocks)
    {
        final switch (st)
        {
            case GCState.free:
                break;
            case GCState.allocated:
                st = GCState.garbage;
                break;
            case GCState.garbage:
                st = GCState.free;
                break;
        }
    }
    // ... run mark/sweep, setting garbage to allocated for reachable blocks
}
Re: std.allocator: false pointers
On 5/2/14, 10:15 AM, Orvid King via Digitalmars-d wrote: Well, in a 64-bit address space, the false pointer issue is almost mute, the issue comes in when you try to apply this design to 32-bit, where the false pointer issue is more prevelent. Is the volume of memory saved by this really worth it? It's the time savings that are most important. Another thing to consider is that this makes it impossible to pre-allocate blocks of varying sizes for absurdly fast allocations via atomic linked lists, in most cases literally a single `lock cmpxchg`. Those can be accommodated I think. Andrei
Re: std.allocator: false pointers
On 5/2/14, 10:12 AM, Marc Schütz schue...@gmx.net wrote: If destructors are still going to be called (see other thread), this trick is dangerous, because the resurrected objects might later on be destroyed again (double free). Yah, forgot to mention this trick is only applicable to what I call the passive heap. Thanks! -- Andrei
Re: Isolated by example
On Friday, 2 May 2014 at 09:41:48 UTC, Marc Schütz wrote: To make this more useful, turn it into a requirement. It gets us deterministic destruction for reference types. Example: ... isolated tmp = new Tempfile(); // use tmp ... // tmp is guaranteed to get cleaned up } No because... This needs more elaboration. The problem is control flow: isolated a = new A(); if(...) { immutable b = a; ... } a.foo(); // -- ??? (Similar for loops and gotos.) There are several possibilities: Of this. 1) isolateds must be consumed either in every branch or in no branch, and this is statically enforced by the compiler. 2) It's just forbidden, but the compiler doesn't guarantee it except where it can. 3) The compiler inserts a hidden variable to track the status of the isolated, and asserts if it is used while it's in an invalid state. This can be elided if it can be proven to be unnecessary. These solutions are all unnecessarily restrictive. If the variable may be consumed, it is consumed. This is a problem solved for ages for non-nullables; there is no need to brainstorm here. I would prefer 3), as it is the most flexible. I also believe a similar runtime check is done in other situations (to guard against return of locals in @safe code, IIRC). 3 is idiotic as the compiler can't ensure anything at compile time. Random failure at runtime for valid code is not desirable.
Re: std.allocator: false pointers
On 5/2/14, 10:26 AM, Steven Schveighoffer wrote: False pointers are less of a problem in 64-bit code, but you can run into worse issues. If you are not zeroing the memory when deallocating, then if it mysteriously comes alive again, it has the ghost of what could be a pointer to other code. Your blocks are more likely to resurrect once one of them resurrects. Good point. Why not keep the 3 states, but just treat unmarked blocks as free? Then the next time you go through tracing, change the bit to free if it was already marked. This doesn't save on the bit space, but I think the savings there is minimal anyway. However, it does allow the final pass to be saved. That's a great idea. Thanks! Andrei
Re: std.allocator: false pointers
On 5/2/14, 10:33 AM, Steven Schveighoffer wrote: On Fri, 02 May 2014 13:26:41 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: Why not keep the 3 states, but just treat unmarked blocks as free? Then the next time you go through tracing, change the bit to free if it was already marked. Sorry, if it was already *unmarked* (or marked as garbage). Yah, understood. Unfortunately I just realized that would require either to keep the bits together or to scan two memory areas when trying to allocate, both of which have disadvantages. Well, I guess I'll go with the post-tracing pass. -- Andrei
Re: std.allocator: false pointers
On Fri, 02 May 2014 14:00:18 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/2/14, 10:33 AM, Steven Schveighoffer wrote: On Fri, 02 May 2014 13:26:41 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: Why not keep the 3 states, but just treat unmarked blocks as free? Then the next time you go through tracing, change the bit to free if it was already marked. Sorry, if it was already *unmarked* (or marked as garbage). Yah, understood. Unfortunately I just realized that would require either to keep the bits together or to scan two memory areas when trying to allocate, both of which have disadvantages. Well, I guess I'll go with the post-tracing pass. -- Andrei What is the problem with keeping the bits together? -Steve
Re: Isolated by example
On Friday, 2 May 2014 at 09:41:48 UTC, Marc Schütz wrote: That's true, but it is also a breaking change, because then suddenly some variables aren't writable anymore (or alternatively, the compiler would have to analyse all future uses of the variable first to see whether it can be inferred isolated, if that's even possible in the general case). I believe it's fine if explicit annotation is required. No, I expect the compiler to backtrack inference when it hits an error, not to infer eagerly, because indeed, the eager inference would be a breaking change.
Re: Isolated by example
Correct me if I'm wrong here, but this seems really similar to how Rust does owned pointers and move semantics. Or is there a large conceptual difference between the two that I'm missing? I believe that the issues that people are bringing up with bad interaction with UFCS, and losing isolated data after passing it to a function, are managed in Rust with the notion of borrowed pointers. Perhaps something analogous to this could accompany the `isolated` idea?
Re: std.allocator: false pointers
On 5/2/14, 11:07 AM, Steven Schveighoffer wrote: On Fri, 02 May 2014 14:00:18 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/2/14, 10:33 AM, Steven Schveighoffer wrote: On Fri, 02 May 2014 13:26:41 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: Why not keep the 3 states, but just treat unmarked blocks as free? Then the next time you go through tracing, change the bit to free if it was already marked. Sorry, if it was already *unmarked* (or marked as garbage). Yah, understood. Unfortunately I just realized that would require either to keep the bits together or to scan two memory areas when trying to allocate, both of which have disadvantages. Well, I guess I'll go with the post-tracing pass. -- Andrei What is the problem with keeping the bits together? More implementation (I have a BitVector type but not a KBitsVector!k type), and scanning can't be done with fast primitives. -- Andrei
Re: DIP61: redone to do extern(C++,N) syntax
On Friday, 2 May 2014 at 17:34:49 UTC, deadalnix wrote: framework_structname_function() You are only proving that you are missing the point completely. Then I ask you to be graceful and explain it to me.
Re: std.allocator: false pointers
On Fri, 02 May 2014 14:42:52 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/2/14, 11:07 AM, Steven Schveighoffer wrote: What is the problem with keeping the bits together? More implementation (I have a BitVector type but not a KBitsVector!k type), and scanning can't be done with fast primitives. -- Andrei Given a bitvector type, a 2bitvector type can be implemented on top of it. If one bit is free, and another is garbage, you just have to look for any set bits for free blocks. Yes, you have to look through 2x as much memory, but only until you find a free block. -Steve
Re: std.allocator: false pointers
On 5/2/14, 11:50 AM, Steven Schveighoffer wrote: On Fri, 02 May 2014 14:42:52 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/2/14, 11:07 AM, Steven Schveighoffer wrote: What is the problem with keeping the bits together? More implementation (I have a BitVector type but not a KBitsVector!k type), and scanning can't be done with fast primitives. -- Andrei Given a bitvector type, a 2bitvector type can be implemented on top of it. If speed is no issue, sure :o). My intuition is that the TwoBitVector would need certain primitives from BitVector to work well. If one bit is free, and another is garbage, you just have to look for any set bits for free blocks. Yes, you have to look through 2x as much memory, but only until you find a free block. Hmm, so if garbage is 0, then to allocate we'd need to scan for a hole of contiguous zeros (cheap) instead of a checkered pattern (expensive). I'll think about it. Andrei
Re: Isolated by example
On Friday, 2 May 2014 at 17:46:54 UTC, deadalnix wrote: On Friday, 2 May 2014 at 09:41:48 UTC, Marc Schütz wrote: To make this more useful, turn it into a requirement. It gets us deterministic destruction for reference types. Example: ... isolated tmp = new Tempfile(); // use tmp ... // tmp is guaranteed to get cleaned up } No because... This needs more elaboration. The problem is control flow: isolated a = new A(); if(...) { immutable b = a; ... } a.foo(); // -- ??? (Similar for loops and gotos.) There are several possibilities: Of this. 1) isolateds must be consumed either in every branch or in no branch, and this is statically enforced by the compiler. 2) It's just forbidden, but the compiler doesn't guarantee it except where it can. 3) The compiler inserts a hidden variable to track the status of the isolated, and asserts if it is used while it's in an invalid state. This can be elided if it can be proven to be unnecessary. These solutions are all unnecessary restrictive. If the variable may be consumed it is consumed. This is a problem solved for ages for non nullables, there is no need to brainstorm here. I think the situation is different here. For nullables, you wouldn't gain much by more precise tracking. For isolated, as noted, we'd gain deterministic lifetimes for reference types. This is IMO important enough to accept a few minor complications (which are anyway solvable). There would be no more need for the unsafe std.typecons.scoped, which only works for classes anyway, but not slices or pointers. I would prefer 3), as it is the most flexible. I also believe a similar runtime check is done in other situations (to guard against return of locals in @safe code, IIRC). 3 is idiotic as the compiler can't ensure anything at compile time. You can in most cases. Runtime failure is only for the cases where it's not possible. Random failure at runtime for valid code is not desirable. 
It's not random, and the code was _not_ valid: it tried to use a consumed isolated.
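The std.typecons.scoped workaround mentioned above looks like this in today's D (classes only; the `Tempfile` class and counter are invented for illustration):

```d
import std.typecons : scoped;

int destroyed; // counts destructor runs, for demonstration only

class Tempfile
{
    ~this() { ++destroyed; }
}

void main()
{
    {
        auto tmp = scoped!Tempfile();
        // use tmp ...
    } // destructor runs here, deterministically, at end of scope
    assert(destroyed == 1);
}
```

Note that this gives the deterministic lifetime but none of the compiler-checked aliasing guarantees an `isolated` qualifier would provide: nothing stops a reference to the scoped instance from escaping.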
Re: std.allocator: false pointers
On 5/2/14, 11:56 AM, Andrei Alexandrescu wrote: If speed is no issue, sure :o). My intuition is that the TwoBitVector would need certain primitives from BitVector to work well. Heh, however it's implemented, TwoBitVector's very name implies that it's cheap to use ;)
Re: std.allocator: false pointers
On Fri, 02 May 2014 14:56:00 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/2/14, 11:50 AM, Steven Schveighoffer wrote: On Fri, 02 May 2014 14:42:52 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/2/14, 11:07 AM, Steven Schveighoffer wrote: What is the problem with keeping the bits together? More implementation (I have a BitVector type but not a KBitsVector!k type), and scanning can't be done with fast primitives. -- Andrei Given a bitvector type, a 2bitvector type can be implemented on top of it. If speed is no issue, sure :o). My intuition is that the TwoBitVector would need certain primitives from BitVector to work well. If one bit is free, and another is garbage, you just have to look for any set bits for free blocks. Yes, you have to look through 2x as much memory, but only until you find a free block. Hmm, so if garbage is 0, then to allocate we'd need to scan for a hole of contiguous zeros (cheap) instead of a checkered pattern (expensive). I'll think about it. Well, you are looking for one bit set (free), or another bit set (garbage). So the pattern may not be uniform. What you probably want to do is to store the bits close together, but probably not *right* together. That way you can use logic-or to search for bits. -Steve
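A sketch of that search under an assumed layout where the free bits and the garbage bits live in two parallel arrays of words: a word contains an available block whenever the OR of its two words is nonzero, so the scan stays one cheap operation per word.

```d
// Parallel bit arrays: bit i of word w covers block w * 64 + i.
// A block can satisfy an allocation if its free bit OR its garbage bit is set.
size_t findAvailableWord(const(ulong)[] freeBits, const(ulong)[] garbageBits)
{
    foreach (i; 0 .. freeBits.length)
        if ((freeBits[i] | garbageBits[i]) != 0)
            return i;          // some block in word i is usable
    return size_t.max;         // nothing available
}

void main()
{
    assert(findAvailableWord([0UL, 0b100UL], [0UL, 0UL]) == 1);
    assert(findAvailableWord([0UL], [0b1UL]) == 0);
    assert(findAvailableWord([0UL], [0UL]) == size_t.max);
}
```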
Re: DIP61: redone to do extern(C++,N) syntax
On Fri, 02 May 2014 15:06:13 -0400, Walter Bright newshou...@digitalmars.com wrote: On 5/2/2014 6:53 AM, Steven Schveighoffer wrote: Can you explain to people who don't understand DMD code, does this exactly implement the DIP? Yes. The two questions above imply that the DIP isn't enough to answer those questions... It follows the scoping and name resolution rules used for template mixins. OK, the questions (that I didn't understand) gave me the impression that you did something different from the DIP. -Steve
Re: DIP61: redone to do extern(C++,N) syntax
On 5/2/2014 6:53 AM, Steven Schveighoffer wrote: Can you explain to people who don't understand DMD code, does this exactly implement the DIP? Yes. The two questions above imply that the DIP isn't enough to answer those questions... It follows the scoping and name resolution rules used for template mixins.
Re: Isolated by example
On Friday, 2 May 2014 at 18:32:13 UTC, Dylan Knutson wrote: Correct me if I'm wrong here, but this seems really similar to how Rust does owned pointers and move semantics. Or is there a large conceptual difference between the two that I'm missing? I believe that the issues that people are bringing up with bad interaction with UFCS, and losing isolated data after passing it to a function, are managed in Rust with the notion of borrowed pointers. Perhaps something analogous to this could accompany the `isolated` idea? I don't think bolting Rust's type system onto D is a viable option at this point.
Re: Isolated by example
On Friday, 2 May 2014 at 18:32:13 UTC, Dylan Knutson wrote: Correct me if I'm wrong here, but this seems really similar to how Rust does owned pointers and move semantics. Or is there a large conceptual difference between the two that I'm missing? Yes, there are some parallels, although there's no merging of islands in Rust, AFAIK. I believe that the issues that people are bringing up with bad interaction with UFCS, and losing isolated data after passing it to a function, are managed in Rust with the notion of borrowed pointers. Perhaps something analogous to this could accompany the `isolated` idea? This will definitely need more thought. I also don't think that UFCS is special; methods should probably be just treated like free functions with a hidden parameter, just as they are for pure. Purity might be one part of the solution here: pure functions can for example take isolated arguments without consuming them, iff their parameters and the isolated variable have incompatible types in that it mustn't be possible to store a reference to one in the other. (The same is true for scoped variables: they cannot escape their scope, even if the pure function's params are not marked as scope.) Another option are scope/in parameters. All of this purity, isolated, scope, uniqueness business is closely intertwined... there just needs to be an elegant way to make it all fit together.
Re: DIP(?) Warning to facilitate porting to other archs
On 5/2/2014 12:21 PM, Jonathan M Davis via Digitalmars-d wrote: As far as the compiler is concerned, everything should be either an error or nothing (and Walter agrees with me on this; It would be nice if all code *could* be considered either good or error without causing problems. But we don't live in a perfect black-and-white reality, and forcing everything into a dichotomy doesn't always work out so well. Warnings belong in lint-like tools where the user can control what they want to be warned about, Warnings ARE a built-in lint-like tool. On top of that, lint itself proves that lint tends to not get used. If it's too hard for people to occasionally toss in a -m32 to check if that works, then no lint-like tool is going to solve the issue either. That said, I do think people are underestimating the difficulty of this enhancement, and overestimating the benefit. It's difficult because size_t is (by design) only an alias, and there are issues with making it a separate type. And it's not really worth the difficulty of getting around all that, because it's already trivially checked whenever you want by tossing in an -m32. I think this enhancement is a great *idea*, but not realistic.
Re: D For A Web Developer
On 4/30/2014 4:17 PM, Ola Fosheim Grøstad ola.fosheim.grostad+dl...@gmail.com wrote: On Wednesday, 30 April 2014 at 20:00:59 UTC, Russel Winder via (*) Are we allowed to have gotos any more since Dijkstra's letter? You better ask the dining philosophers. Nah, they're too busy trying to figure out how to use their forks.
Re: Isolated by example
On Friday, 2 May 2014 at 18:10:42 UTC, deadalnix wrote: On Friday, 2 May 2014 at 09:41:48 UTC, Marc Schütz wrote: That's true, but it is also a breaking change, because then suddenly some variables aren't writable anymore (or s/writable/readable/ of course; alternatively, the compiler would have to analyse all future uses of the variable first to see whether it can be inferred isolated, if that's even possible in the general case). I believe it's fine if explicit annotation is required. No, I expect the compiler to backtrack inference when it hits an error, not to infer eagerly, because indeed, the eager inference would be a breaking change. This might work, but would require defining an order of evaluation for static if & co., because you could create logical cycles otherwise.
Re: More radical ideas about gc and reference counting
On Friday, 2 May 2014 at 16:20:47 UTC, Andrei Alexandrescu wrote: On 5/2/14, 9:04 AM, fra wrote: On Wednesday, 30 April 2014 at 20:21:33 UTC, Andrei Alexandrescu wrote: I think there's no need to argue that in this community. The GC never guarantees calling destructors even today, so this decision would be just a point in the definition space (albeit an extreme one). I think I (we) need a bit of clarification. Docs in http://dlang.org/class.html#destructors states that: The garbage collector calls the destructor function when the object is deleted. As far as I understand, this means that destructors are always called when an instance is collected. Is this right? Doesn't this mean that destructors are guaranteed to run for unreferenced objects if we force the GC to do a full collect cycle? False pointers make it seem like unreferenced objects are in fact referenced, so fewer destructors will run than there should be. -- Andrei Yeah, you have to read the fine print: collection implies destruction *but* no guarantees the collection will actually ever happen.
Re: More radical ideas about gc and reference counting
On Friday, 2 May 2014 at 15:06:59 UTC, Andrei Alexandrescu wrote: So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Well, that's always been the case, and even worse, since in a dynamic array, destructors are guaranteed to *never* be run. Furthermore, given that append causes relocation, which duplicates, you are almost *guaranteed* to leak your destructors. You just can't keep track of the usage of a naked dynamic array. This usually comes as a great surprise to users in .learn. It's also the reason why using File[] never ends well...
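The behaviour monarch_dodra describes can be seen with a small program; a hedged sketch (what actually prints depends entirely on the GC, which is the point):

```d
import std.stdio;

struct S {
    int id;
    ~this() { writeln("dtor ", id); }
}

void main() {
    auto arr = [S(1), S(2)]; // GC-allocated array of structs with dtors
    arr ~= S(3);             // append may relocate; the old block's
                             // destructors are never run on collection
    arr = null;              // drop the last reference
    // Whether "dtor 1/2/3" ever prints for the GC-owned copies is up to
    // the collector -- typically it never happens before program exit.
}
```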
Re: DIP(?) Warning to facilitate porting to other archs
On Friday, 2 May 2014 at 19:54:43 UTC, Nick Sabalausky wrote: Warnings ARE a built-in lint-like tool. On top of that, lint itself proves that lint tends to not get used. If it's too hard for people to occasionally toss in a -m32 to check if that works, then no lint-like tool is going to solve the issue either. One solution to this is to have your editor run the lint tool: http://i.imgur.com/w7SgbnN.png
Re: More radical ideas about gc and reference counting
On Fri, May 02, 2014 at 09:03:15PM +0000, monarch_dodra via Digitalmars-d wrote: On Friday, 2 May 2014 at 15:06:59 UTC, Andrei Alexandrescu wrote: So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Well, that's always been the case, and even worse, since in a dynamic array, destructors are guaranteed to *never* be run. Furthermore, given that append causes relocation, which duplicates, you are almost *guaranteed* to leak your destructors. You just can't keep track of the usage of a naked dynamic array. This usually comes as a great surprise to users in .learn. It's also the reason why using File[] never ends well... This is why I'm unhappy with the way this is going. Current behaviour of structs with dtors is already fragile enough; now we're pulling the rug out from under classes as well. So that will be yet another case where dtors won't work as expected. I'm getting the feeling that dtors were bolted on as an afterthought, and only work properly for a very narrow spectrum of use cases. Rather than expand the usable cases, we're proposing to reduce them (by getting rid of class dtors). I can't see *this* ending well either. :-( T -- Democracy: The triumph of popularity over principle. -- C.Bond
Re: DIP(?) Warning to facilitate porting to other archs
On Fri, 02 May 2014 15:54:37 -0400 Nick Sabalausky via Digitalmars-d digitalmars-d@puremagic.com wrote: Warnings ARE a built-in lint-like tool. Perhaps, but having them in the compiler is inherently flawed, because you have little-to-no control over what it warns about, and you're forced to essentially treat them as errors, because it's incredibly error-prone to leave any warnings in the build (they mask real problems too easily). As such, it makes no sense to have warnings in the compiler IMHO. On top of that, lint itself proves that lint tends to not get used. True, that is a problem. But if folks really want the warnings, they can go to the extra effort. And I'd much rather err on the side of folks screwing up because they didn't bother to run the tool than having to fix nonexistent problems in my code because someone convinced a compiler dev to make the compiler warn about something that's a problem some of the time but isn't a problem in what I'm actually doing. - Jonathan M Davis
Re: More radical ideas about gc and reference counting
On Fri, 02 May 2014 21:03:15 +0000 monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote: On Friday, 2 May 2014 at 15:06:59 UTC, Andrei Alexandrescu wrote: So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Well, that's always been the case, and even worse, since in a dynamic array, destructors are guaranteed to *never* be run. Furthermore, given that append causes relocation, which duplicates, you are almost *guaranteed* to leak your destructors. You just can't keep track of the usage of a naked dynamic array. This usually comes as a great surprise to users in .learn. It's also the reason why using File[] never ends well... Heck, I probably knew that before, but I had completely forgotten. If you'd asked me yesterday whether struct destructors were run in dynamic arrays, I'd have said yes. And if someone like me doesn't remember that, would you expect the average D programmer to? The current situation is just plain bug-prone. Honestly, I really think that we need to figure out how to make it so that struct destructors are guaranteed to be run so long as the memory that they're in is collected. Without that, having destructors in structs anywhere other than directly on the stack is pretty much broken. - Jonathan M Davis
Re: DIP(?) Warning to facilitate porting to other archs
On Friday, 2 May 2014 at 21:40:09 UTC, Jonathan M Davis via Digitalmars-d wrote: True, that is a problem. But if folks really want the warnings, they can go to the extra effort. Why are we making people go to extra effort to get lint-like functionality if we want it to be something that everyone uses? Whether a linter is a separate logical entity within the compiler or a library that can be hooked into, it should be on by default.
Re: Default arguments in function callbacks not taken into account when instantiating templates has huge security implications
Is this in Bugzilla?
Re: More radical ideas about gc and reference counting
On Fri, May 02, 2014 at 11:44:47PM +0200, Jonathan M Davis via Digitalmars-d wrote: On Fri, 02 May 2014 21:03:15 +0000 monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote: On Friday, 2 May 2014 at 15:06:59 UTC, Andrei Alexandrescu wrote: So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Well, that's always been the case, and even worse, since in a dynamic array, destructors are guaranteed to *never* be run. Furthermore, given that append causes relocation, which duplicates, you are almost *guaranteed* to leak your destructors. You just can't keep track of the usage of a naked dynamic array. This usually comes as a great surprise to users in .learn. It's also the reason why using File[] never ends well... Heck, I probably knew that before, but I had completely forgotten. If you'd asked me yesterday whether struct destructors were run in dynamic arrays, I'd have said yes. And if someone like me doesn't remember that, would you expect the average D programmer to? The current situation is just plain bug-prone. Honestly, I really think that we need to figure out how to make it so that struct destructors are guaranteed to be run so long as the memory that they're in is collected. Without that, having destructors in structs anywhere other than directly on the stack is pretty much broken. [...] Thank you, that's what I've been trying to say. Having dtors sometimes run and sometimes not is very bug-prone -- if for a seasoned D programmer, then how much more for an average D programmer? We need some kind of guarantees. The current situation sux. I might even say we'd have been better off having no dtors in the first place -- at least then it's consistent; you know you always have to clean up. But the current situation of being neither here nor there, neither always cleaning up nor never, is not a good place to be in. I have to say that the more I look at this, the more I don't like this part of the language. 
:-/ T -- MASM = Mana Ada Sistem, Man!
Re: Isolated by example
On Friday, 2 May 2014 at 18:32:13 UTC, Dylan Knutson wrote: Correct me if I'm wrong here, but this seems really similar to how Rust does owned pointers and move semantics. Or is there a large conceptual difference between the two that I'm missing? There is some similarity, but Rust's system has a few more capabilities. These extra capabilities come at a great increase in complexity, so I don't think it is worth it. I believe that the issues that people are bringing up with bad interaction with UFCS, and losing isolated data after passing it to a function, are managed in Rust with the notion of borrowed pointers. Perhaps something analogous to this could accompany the `isolated` idea? Yes, Rust handles this with borrowed pointers. You can also handle this by:
- Passing data back and forth (via argument, and then returning it so the caller gets it back).
- Using a wrapper of some kind.
I don't think getting the whole menagerie of Rust pointer types is a good thing. They certainly allow for a lot, but once again, come at a great complexity cost. If most of it can be achieved with much lower complexity, that is a win.
Re: Isolated by example
On Friday, 2 May 2014 at 20:10:04 UTC, Marc Schütz wrote: On Friday, 2 May 2014 at 18:10:42 UTC, deadalnix wrote: On Friday, 2 May 2014 at 09:41:48 UTC, Marc Schütz wrote: That's true, but it is also a breaking change, because then suddenly some variables aren't writable anymore (or s/writable/readable/ of course Yes. alternatively, the compiler would have to analyse all future uses of the variable first to see whether it can be inferred isolated, if that's even possible in the general case). I believe it's fine if explicit annotation is required. No, I expect the compiler to backtrack inference when it hits an error, not to infer eagerly, because indeed, the eager inference would be a breaking change. This might work, but would require defining an order of evaluation for static if & co., because you could create logical cycles otherwise. Yes, but this is unrelated to isolated. In fact this is already the case. static if is not deterministic. I've made a proposal to improve the situation: http://wiki.dlang.org/DIP31 But to be fair I'm not quite satisfied. This still leaves some room for unspecified results, but is a great improvement over the current situation.
Reopening the debate about non-nullable-by-default: initialization of member fields
We are all sick and tired of this debate, but today I've seen a question in Stack Exchange's Programmers board that raises a point I don't recall being discussed here: http://programmers.stackexchange.com/questions/237749/how-do-languages-with-maybe-types-instead-of-nulls-handle-edge-conditions Consider the following code:

    class Foo{
        void doSomething(){
        }
    }

    class Bar{
        Foo foo;

        this(Foo foo){
            doSomething();
            this.foo=foo;
        }

        void doSomething(){
            foo.doSomething();
        }
    }

Constructing an instance of `Bar`, of course, segfaults when it calls `doSomething`, which tries to call `foo`'s `doSomething`. Non-nullable-by-default should avoid such problems, but in this case it doesn't work, since we call `doSomething` in the constructor, before we have initialized `foo`. Non-nullable-by-default is usually used in functional languages, where the emphasis on immutability requires a syntax that always allows initialization at declaration, so they avoid this problem elegantly. This is not the case in D - member fields are declared in the body of the class or struct and initialized in the constructor - separate statements that nothing stops you from putting other statements between. Of course, D does support initialization at declaration for member fields, but this is far from a sufficient solution, since very often the information required for setting the member field resides in the constructor's arguments. In the example, we can't really initialize `foo` at the declaration, since we are supposed to get its initial value from the constructor's argument `Foo foo`. I can think of 3 solutions - each with its own major drawback and each with a flaw that prevents it from actually solving the problem, but I'll write them here anyway: 1) Using a static analysis that probes into function calls. The major drawback is that it'll probably be very hard to implement. 
The reason it won't work is that it won't be able to probe into overriding methods, which might use an uninitialized member field that the overridden method doesn't use. 2) Disallow calling functions in the constructor before *all* non-nullable member fields are initialized (and of course, the simple static analysis that prevents usage before initialization directly in the constructor code). The major drawback is that sometimes you need to call a function in order to initialize the member field. The reason is best demonstrated with code:

    class Foo{
        void doSomething(){
        }
    }

    class Bar{
        this(){
            doSomething();
        }

        void doSomething(){
        }
    }

    class Baz : Bar{
        Foo foo;

        this(Foo foo){
            this.foo=foo;
        }

        override void doSomething(){
            foo.doSomething();
        }
    }

`Bar`'s constructor is implicitly called before `Baz`'s constructor. 3) Use a Scala-like syntax where the class' body is a constructor that all other constructors must call, allowing initialization on declaration for member fields in all cases. The major drawback is that this is a new syntax that'll have to be used in order to have non-nullable member fields - which means it'll break almost all existing code that uses classes. Not fun. The reason it won't work is that declarations in the struct/class body are not ordered. In Scala, for example, this compiles and breaks with a null pointer exception when trying to construct `Bar`:

    class Foo{
        def doSomething(){
        }
    }

    class Bar(foo : Foo){
        doSomething();
        val m_foo=foo;

        def doSomething(){
            m_foo.doSomething();
        }
    }

Also, like the previous two methods, overriding methods break this solution's promises. This issue should be addressed before implementing non-nullable-by-default.
Re: Reopening the debate about non-nullable-by-default: initialization of member fields
Idan Arye: today I've seen a question in Stack Exchange's Programmers board that raises a point I don't recall being discussed here: This program:

    class A {
        immutable int x;

        this() {
            foo();
            x = 1;
            x = 2;
        }

        void foo() {
            auto y = x;
        }
    }

    void main() {}

Gives:

    temp.d(6,9): Error: immutable field 'x' initialized multiple times

So D can tell x is initialized more than 1 time, but it can't tell x is initialized 0 times inside foo(). Bye, bearophile
Re: More radical ideas about gc and reference counting
On Friday, 2 May 2014 at 20:59:46 UTC, monarch_dodra wrote: Yeah, you have to read the fine print: collection implies destruction *but* no guarantees the collection will actually ever happen. That sounds like the right balance. Also, make construction of objects with destructors @system, as there is no way to ensure the destructor won't resurrect the object or do some goofy thing with a finalized reference it has.
Re: DIP(?) Warning to facilitate porting to other archs
On Fri, 02 May 2014 22:39:12 +0000 Meta via Digitalmars-d digitalmars-d@puremagic.com wrote: On Friday, 2 May 2014 at 21:40:09 UTC, Jonathan M Davis via Digitalmars-d wrote: True, that is a problem. But if folks really want the warnings, they can go to the extra effort. Why are we making people go to extra effort to get lint-like functionality if we want it to be something that everyone uses? Whether a linter is a separate logical entity within the compiler or a library that can be hooked into, it should be on by default. The problem is that some of what gets warned about is _not_ actually a problem. If it always were, it would be an error. So, unless you have control over exactly what gets warned about and have the ability to disable the warning in circumstances where it's wrong, it makes no sense to have the warnings, because you're forced to treat them as errors and always fix them, even if the fix is unnecessary. If the compiler provides that kind of control, then fine, it can have warnings, but dmd doesn't and won't, because Walter doesn't want it to have a vast assortment of flags to control anything (warnings included). That being the case, it makes no sense to put the warnings in the compiler. With a lint tool however, you can configure it however you want (especially because there isn't necessarily one, official tool, making it possible to have a lint tool that does exactly what you want for your project). It's not tied to what the language itself requires, making it much more sane as a tool for giving warnings. The compiler tends to have to do what fits _everyone's_ use case, and that just doesn't work for warnings. Putting warnings in the compiler always seems to result in forcing people to change their code to make the compiler shut up about something that is perfectly fine. - Jonathan M Davis
Re: Reopening the debate about non-nullable-by-default: initialization of member fields
On Sat, 03 May 2014 00:50:14 +0000 Idan Arye via Digitalmars-d digitalmars-d@puremagic.com wrote: We are all sick and tired of this debate, but today I've seen a question in Stack Exchange's Programmers board that raises a point I don't recall being discussed here: http://programmers.stackexchange.com/questions/237749/how-do-languages-with-maybe-types-instead-of-nulls-handle-edge-conditions Consider the following code: class Foo{ void doSomething(){ } } class Bar{ Foo foo; this(Foo foo){ doSomething(); this.foo=foo; } void doSomething(){ foo.doSomething(); } } Constructing an instance of `Bar`, of course, segfaults when it calls `doSomething` that tries to call `foo`'s `doSomething`. The non-nullable-by-default should avoid such problems, but in this case it doesn't work since we call `doSomething` in the constructor, before we initialized `foo`. Yeah, I brought this up before, and it's one of the reasons why I'm against non-nullable by default. It means that class references will need to be treated the same as structs whose init property is disabled, which can be _very_ limiting. And I don't know if we currently handle structs with disabled init properties correctly in all cases, since it's not all that hard for something subtle to have been missed, allowing such a struct to be used before it was actually initialized (and the fact that not much code uses them would make it that much more likely that such a bug would go unnoticed). Hopefully, all those issues have been sorted out by now though. If so, then I would think that we already have all of the rules in place for how non-nullable references would be dealt with with regards to initialization, but they'd still be very limiting, because most of D expects that all types have an init property. - Jonathan M Davis
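The struct case Jonathan compares against can be sketched concretely; a minimal example of a type with default construction disabled (the NonNull name and layout are made up for illustration):

```d
// A struct whose .init is unusable: it must be constructed explicitly,
// which is the same constraint non-nullable references would impose.
struct NonNull {
    int* p;
    @disable this();                         // no default construction
    this(int* q) { assert(q !is null); p = q; }
}

void main() {
    // NonNull n;                            // error: disabled default construction
    int x = 42;
    auto n = NonNull(&x);                    // must initialize at declaration
    assert(*n.p == 42);
    // This forced-initialization is exactly what makes such types limiting
    // in code that expects every type to have a valid .init value.
}
```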
Re: DIP(?) Warning to facilitate porting to other archs
On Saturday, 3 May 2014 at 01:17:36 UTC, Jonathan M Davis via Digitalmars-d wrote: The problem is that some of what gets warned about is _not_ actually a problem. If it always were, it would be an error. So, unless you have control over exactly what gets warned about and have the ability to disable the warning in circumstances where it's wrong, it makes no sense to have the warnings, because you're forced to treat them as errors and always fix them, even if the fix is unnecessary. If the compiler provides that kind of control, then fine, it can have warnings, but dmd doesn't and won't, because Walter doesn't want it to have a vast assortment of flags to control anything (warnings included). That being the case, it makes no sense to put the warnings in the compiler. With a lint tool however, you can configure it however you want (especially because there isn't necessarily one, official tool, making it possible to have a lint tool that does exactly what you want for your project). It's not tied to what the language itself requires, making it much more sane as a tool for giving warnings. The compiler tends to have to do what fits _everyone's_ use case, and that just doesn't work for warnings. Putting warnings in the compiler always seems to result in forcing people to change their code to make the compiler shut up about something that is perfectly fine. - Jonathan M Davis I'm not arguing for warnings in the compiler. If we agree that a linter is a good thing that everyone should use, then we should make it as easy as possible to use it - including having it on by default. It's fine if it's customizable, disable-able, etc. Then the users that want to tweak its behaviour or go without can do so. As for what having it on by default means, that's up for debate. Currently, only the determined can use, for example, DScanner, as they have to clone the Github repo, compile it, and then set it up to use with their editor of choice. 
DScanner hasn't become a de-facto standard yet, or been officially blessed, of course, but as soon as that happens, rapid action needs to be taken to ensure that it is as painless as possible to use and enabled by default, preferably transparent to the casual or uninformed user.
Re: More radical ideas about gc and reference counting
I forgot to add these comments by Walter at the top of my previous post: [Walter Bright wrote] The thing is, GC is a terrible and unreliable method of managing non-memory resource lifetimes. Destructors for GC objects are not guaranteed to ever run. So now it looks like dynamic arrays also can't contain structs with destructors :o). -- Andrei Nick
Re: More radical ideas about gc and reference counting
[ monarch_dodra wrote ] Well, that's always been the case, and even worse, since in a dynamic array, destructors are guaranteed to *never* be run. https://issues.dlang.org/show_bug.cgi?id=2757 Resource Management. An issue that has been discussed since 2009, and still no *GOOD* solution. Look at these arguments made back then. email 23 Mar 2009 from the D.d list. Subject: Re: new D2.0 + C++ language. Sat, 21 Mar 2009 20:16:07 -0600, Rainer Deyke wrote: Sergey Gromov wrote: I think this is an overstatement. It's only abstract write buffers where GC really doesn't work, like std.stream.BufferedFile. In any other resource management case I can think of GC works fine. OpenGL objects (textures/shader programs/display lists). SDL surfaces. Hardware sound buffers. Mutex locks. File handles. Any object with a non-trivial destructor. Any object that contains or manages one of the above. Many of the above need to be released in a timely manner. For example, it is a serious error to free an SDL surface after closing the SDL video subsystem, and closing the SDL video subsystem is the only way to close the application window under SDL. Non-deterministic garbage collection cannot work. Others don't strictly need to be released immediately after use, but should still be released as soon as reasonably possible to prevent resource hogging. The GC triggers when the program is low on system memory, not when the program is low on texture memory. By my estimate, in my current project (rewritten in C++ after abandoning D due to its poor resource management), about half of the classes manage resources (directly or indirectly) that need to be released in a timely manner. The other 50% does not need RAII, but also wouldn't benefit from GC in any area other than performance. The language sets up the defaults for when these are to run. The programmer has to override the defaults. 
[Sure, this is crude, but it is deterministic] [comment by dsimcha in 2009] Come to think of it, as simple and kludgey-sounding as it is, this is an incredibly good idea if you have an app that does a lot of sitting around waiting for input, etc. and therefore not allocating memory, and you want an easy way to make sure it releases resources in a reasonable amount of time. This belongs in an FAQ somewhere.
Re: Unresolved external symbol
On Thursday, 1 May 2014 at 22:23:22 UTC, Ga wrote: And I am getting a error LNK2019: unresolved external symbol GetDeviceCaps referenced in function _Dmain have you linked gdi32.lib?
Re: Unresolved external symbol
On Friday, 2 May 2014 at 06:07:48 UTC, evilrat wrote: On Thursday, 1 May 2014 at 22:23:22 UTC, Ga wrote: And I am getting a error LNK2019: unresolved external symbol GetDeviceCaps referenced in function _Dmain have you linked gdi32.lib? Thanks a lot, I wasn't sure what to link with, I must've overlooked it on msdn. Thanks once more
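For reference, the library can also be requested from the source file itself instead of the command line; a hedged, Windows-only sketch:

```d
// Windows-only sketch: pragma(lib) asks the linker to pull in gdi32.lib,
// which is what resolves the GetDeviceCaps external symbol.
pragma(lib, "gdi32");

import core.sys.windows.windows;

void main() {
    HDC hdc = GetDC(null);                   // device context for the screen
    int width = GetDeviceCaps(hdc, HORZRES); // needs gdi32.lib to link
    ReleaseDC(null, hdc);
}
```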
const ref parameters and r-value references
I'm in the process of learning/practicing D and I noticed something that seems peculiar coming from a C++ background: If I compile and run:

    void fun(const ref int x) {
        //Stuff
    }

    unittest {
        fun(5); //Error! Does not compile
    }

I get the specified error in my unit test. I understand that the cause is that I've attempted to bind ref to an r-value; what's curious is that in C++, the compiler realizes that this is a non-issue because of 'const' and just 'makes it work'. Is there a rationale behind why D does not do this? Is there a way to write 'fun' such that it avoids copies but still pledges const-correctness while also allowing r-values to be passed in? Thanks in advance!
formattedWrite writes nothing
    class MyClass {
        Appender!string _stringBuilder;

        this() {
            _stringBuilder = Appender!string(null);
            _stringBuilder.clear();
        }

        @property string str() {
            return _stringBuilder.data;
        }

        void append(string s) {
            formattedWrite(_stringBuilder, "%s", s);
        }
    }

    MyClass c = new MyClass();
    c.append("text 1");
    c.append("__222");
    writeln(c.str); // in this case nothing is printed out

Following workarounds work: 1) call _stringBuilder.put() instead of formattedWrite() 2) if _stringBuilder.clear() is omitted in the constructor, formattedWrite(...) will work as expected. Is it a bug or is there a reason for such behaviour?
Re: formattedWrite writes nothing
On 5/2/14, ref2401 via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote: class MyClass { Appender!string _stringBuilder; this() { _stringBuilder = Appender!string(null); _stringBuilder.clear(); Ouch, ouch, ouch! What's happening is that the 'clear' Appender method is only compiled-in if the data is mutable, otherwise you end up calling the object.clear UFCS function. So don't use clear here. I don't know if this is a case of poor method naming or another downside of UFCS. Luckily 'clear' is being renamed to 'destroy' in the object module, so this specific case will not become a problem in the future.
Re: const ref parameters and r-value references
On Friday, 2 May 2014 at 08:17:09 UTC, Mark Isaacson wrote: I'm in the process of learning/practicing D and I noticed something that seems peculiar coming from a C++ background: If I compile and run:

    void fun(const ref int x) {
        //Stuff
    }

    unittest {
        fun(5); //Error! Does not compile
    }

I get the specified error in my unit test. I understand that the cause is that I've attempted to bind ref to an r-value, what's curious is that in C++, the compiler realizes that this is a non-issue because of 'const' and just 'makes it work'. Is there a rationale behind why D does not do this? Is there a way to write 'fun' such that it avoids copies but still pledges const-correctness while also allowing r-values to be passed in? There is `auto ref`, but it only works for templates and is somewhat different:

    void fun()(auto ref const int x) {
        // Stuff
    }

    unittest {
        fun(5);     // pass by value
        int a = 5;
        fun(a);     // pass by ref
    }

It generates two functions, with and without ref respectively. Allowing rvalues to bind to ref (not only const) has been discussed on several occasions, but I don't remember the outcome. Here is one discussion: http://forum.dlang.org/thread/ntsyfhesnywfxvzbe...@forum.dlang.org
Re: formattedWrite writes nothing
On Friday, 2 May 2014 at 10:23:03 UTC, Andrej Mitrovic via Digitalmars-d-learn wrote: Ouch, ouch, ouch! What's happening is that the 'clear' Appender method is only compiled-in if the data is mutable, otherwise you end up calling the object.clear UFCS function. So don't use clear here. I don't know if this is a case of poor method naming or another downside of UFCS. Luckily 'clear' is being renamed to 'destroy' in the object module, so this specific case will not become a problem in the future. I'd say clear should be @disabled in Appender for non-mutable data.
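A sketch of the mutable-element variant, where Appender's own clear member is compiled in and the silent fallback to the object.clear UFCS function cannot happen:

```d
import std.array : appender;
import std.format : formattedWrite;

void main() {
    // With a mutable element type, Appender's member .clear exists,
    // so sb.clear() is NOT routed to object.clear (aka destroy).
    auto sb = appender!(char[])();
    sb.clear();                         // Appender's own member function
    formattedWrite(sb, "%s", "text 1"); // writes through the appender
    assert(sb.data == "text 1");        // the data survives as expected
}
```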