Re: D future ...
On 12/21/2016 7:57 PM, Chris Wright wrote:
> You can implement write barriers as runtime calls, but omit them in @nogc code.

@nogc code is code that doesn't allocate from the gc. It can still write to gc allocated objects, however, so that idea won't work.

> However, this would be costly -- it's an expensive technique in general; the current GC mallocs each object instead of mmaping a range of memory; and in D you can't move heap objects safely,

You can if using a "mostly copying" generational collector, which is what I did long ago. It works.

> so you can't distinguish generations based on pointers (you'd have to mark GC data structures, and it's O(log n) to find the right one). You can implement write barriers with mprotect. However, this won't give you good granularity. You just know that someone wrote something to an 8 kilobyte block of memory that has a pointer in it somewhere. This requires the GC to use mmap instead of malloc, and it is strongly encouraged not to put pointer-free objects in the same page as objects with pointers.

Using mprotect works, and I wrote a generational collector using it long ago, but it is even slower.
Re: D future ...
On 12/21/2016 6:50 AM, thedeemon wrote:
> Have you seen this one? http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html

Although I had called them write gates, write barriers are the same thing. Yes, that's the problem with implementing a generational collector in D. I once tried to implement write barriers by using the hardware VM system. I'd mark the old generation pages as read-only. When the program would write to those pages, a seg fault happened. This would then run a handler in the GC code which would mark that page as "dirty", then write-enable the page, and restart the program at the point where it seg faulted. This worked great. The only trouble was that the seg faulting path at runtime was so slow it ruined the speed advantage of not having write barriers. So I had to abandon it. But that was long ago. Maybe the tradeoff is better these days with modern hardware. But I suspect that if other GC developers are not using this technique, it is still too slow.
Re: D future ...
On 12/21/2016 3:36 AM, thedeemon wrote:
> Bad news: without complete redesign of the language and turning into one more C++/CLI (where you have different kinds of pointers in the language for GC and non-GC), having C performance and Go-style low-pause GC is not really possible. You have to choose one. Go chose GC with short pauses but paid with slow speed overall and slow C interop. D chose C-level performance but paid for it with a slow GC.

The trouble with a better GC is it usually entails changing the code generator to emit a "write gate" that goes along with assignments via a pointer. This write gate signals the GC that a particular block is being written to, so that block can be marked as "dirty". (Paging virtual memory systems do this automatically.) What this implies is better GC performance comes at a cost of worse performance of the non-GC code. This strategy is effective for a language that makes very heavy use of the GC (like Java does), but for a language like D that uses the GC lightly, it's a much more elusive benefit.
Re: ModuleInfo, factories, and unittesting
On 12/21/2016 9:43 AM, Johannes Pfau wrote: You need some kind of linker support to do this to provide the start/end symbols. That's partially correct. I've done this for decades by having the compiler itself emit those symbols. There are other tricks one can do, such as putting the pointers into the exception handler tables sections.
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 23:33:50 UTC, Jonathan M Davis wrote:
> Definitely. It is almost always the case that building a program with dmd is much faster than building with gdc or ldc. The tradeoff is that gdc and ldc do a much better job optimizing the resultant binary. So, with dmd, you get fast compilation but a somewhat slower binary, whereas with gdc and ldc, you get slow compilation but a faster binary. If anyone is seeing dmd compile anything significantly more slowly than gdc or ldc, then dmd has a bug, and it should be reported (though reducing the code to something reportable can be entertaining; fortunately, dustmite can be a big help with that).
> - Jonathan M Davis

That is very true for regular builds, but not so much for optimized builds.
Re: D future ...
> Library Standardization:
>
> Some of the proposals sounds very correct. The library needs to be
> split. Every module needs its own GIT. People need to be able to add
> standard modules ( after approval ).

I can't agree with you there. There are a lot of dependencies between modules.

> No offense but where is the standard database library for D? There is
> none. That is just a load of bull. Anybody who wants to program in any
> language expect the language to have a standard database library! Not
> that you need to search the packages for a standard library. I have seen
> one man projects that have more standard library support then D.

Go, Java, and C# each have database _interfaces_ in their standard libraries. Most other languages don't. These interfaces don't, by themselves, connect to any database; you still need a driver library for each specific database. With the rise of document databases and alternate query systems, it's harder to define a generic database library. It's still possible to write one for SQL databases.

> I do not use 3th party packages

The standard library needs a rigorous approval process. That takes time and human attention for the review and to address concerns identified during review. D is marginal, and that means we simply don't have the time to get these things done.

> Documentation:
> --
>
> I do not use it. Its such a mess to read with long paragraphs

Long paragraphs describing the details of what a thing does are generally a good thing. MSDN's documentation is like that. The "mess" part is when we have five hundred identifiers documented in one web page. dpldocs.info is much better about this.

> and a LOT of stuff not even documented.

There's no excuse for that.

> This automated documentation generation is the same i see in other
> "new" languages.

The documentation is hand-written.
We don't have a tool capable of annotating, say, std.string.detab with "Replace each tab character in s with the number of spaces necessary to align the following character at the next tab stop." without human intervention. That would be impressive, though. The HTML is generated by a tool.

> Editor support:
> ---
>
> What a struggle. Visual Studio C is probably the editor with the best
> 3th party support.

VS Code isn't terrible. It's got the disadvantage that it tries to autocomplete every possible thing -- so you get __monitor and __vptr first, and the other options include things you can't access.

> Too many need 3th party to do something that D needs to support from
> itself:
>
> dcd - Used for auto completion
> dfmt - Used for code formatting
> dscanner - Used for static code linting ...
>
> This needs to be in the default installation of dmd! It makes no sense
> that these are not included.

From a usability standpoint, I agree. From a political standpoint, it's unsurprising that they're not included.

> Future:
>
> You want D to have traction.

That's *a* goal, but it's not the goal that's closest to most of our hearts, I think. Popularity doesn't exactly bring enjoyment -- it can remove some annoyances, but it's easier to work around those annoyances. It isn't fulfilling. And it doesn't pay the bills.

> Marketing, more library support, less focus
> on adding even more advanced features, fixing issues (
> like better GC ),

There are huge barriers to bringing D's GC to the state of the art. Some improvements could be made, though. For instance, the GC currently scans heap objects scattered about the whole heap. Using separate virtual address ranges for objects with pointers and objects without might improve efficiency somewhat.

> CTFE ( Stefan is dealing with that ), Documentation,
> better Editor support...

I think code-d could potentially be extended to install its dependencies, which would improve the situation there.
> Walter / Andrei:
>
> No offense guys, just something that i see in a lot of posts. The
> hinting at people to add more to the standard libraries. That little
> push. But frankly, its annoying when nothing gets done.
>
> People complain about x feature. You tell people to add to the standard
> library or bring the project in. But anybody who has ever read this
> forum sees how adding things to the language is LONG process and a lot
> of times idea's get shot down very fast.

The language is *very* conservative. The standard library is merely cautious. But since there's no cost to adding a package to dub, that doesn't prevent code from being written and reused. Except by those who refuse to go outside the standard library, and for those people, I would recommend Java or C#.

> For the standard library there is no process as far as i can tell.
> Experimental at best, where code seems to have a nice long death.

It could be much better documented. The process for small changes:

* Make the change.
* Make a pull request.

The process f
Re: DIP 1007 - keywords as identifiers with an escape symbol - feedback
On Mon, 19 Dec 2016 09:58:28 +, default0 wrote: > On Monday, 19 December 2016 at 08:30:07 UTC, Stefan Koch wrote: >> If you are prepending # you might as well prepend _ > > That doesn't solve the complications this introduces if you want to > serialize to/from members with these names, as seen in the Examples > section of the DIP. Sure, but then we could argue that there should be a way to make any sequence of characters an identifier. There are systems that expect serialized field names with hyphens, for instance. Or if I'm writing code to interoperate with a Cherokee system, I might need to produce JSON with a field "ᏗᏍᏚᏗ". (Which is perfectly valid in C# but not in D.) Jsonizer does this the only way you can: you can use an attribute to specify the serialized name for a field. While we're talking about this, this proposal breaks any string mixin that mixes in an identifier detected with reflection. Address that problem and your proposal just allows identifiers to contain a leading #. There are usually ways around using string mixins, but we haven't even proposed starting a deprecation process for them.
Re: D future ...
On Thursday, 22 December 2016 at 03:57:10 UTC, Chris Wright wrote:
> You can implement write barriers as runtime calls, but omit them in @nogc code.

That means redefining what @nogc means. Currently it basically means "does not GC-allocate", and you want to change it to "does not mutate GC-allocated objects", which is very different and hardly possible to check in the compiler without further changing the language.

> However, this would be costly -- it's an expensive technique in general;

Yep, that's what I'm saying. Fast GC has a price on the speed of the rest of the code. Fast code has a price on GC. Unless you separate them very clearly by language means.
Re: D future ...
On Wed, 21 Dec 2016 11:36:14 +, thedeemon wrote:
> Bad news: without complete redesign of the language and turning into one
> more C++/CLI (where you have different kinds of pointers in the language
> for GC and non-GC), having C performance and Go-style low-pause GC is
> not really possible. You have to choose one. Go chose GC with short
> pauses but paid with slow speed overall and slow C interop. D chose
> C-level performance but paid for it with a slow GC.

You can implement write barriers as runtime calls, but omit them in @nogc code. However, this would be costly -- it's an expensive technique in general; the current GC mallocs each object instead of mmaping a range of memory; and in D you can't move heap objects safely, so you can't distinguish generations based on pointers (you'd have to mark GC data structures, and it's O(log n) to find the right one). You can implement write barriers with mprotect. However, this won't give you good granularity. You just know that someone wrote something to an 8 kilobyte block of memory that has a pointer in it somewhere. This requires the GC to use mmap instead of malloc, and it is strongly encouraged not to put pointer-free objects in the same page as objects with pointers.
Re: D future ...
On Tue, 20 Dec 2016 08:20:32 +, LiNbO3 wrote: > And have the patch wait in the PR queue until the end of time, > not even acknowledged at all ? When I've put in PRs for doc improvements, they've been reviewed relatively quickly.
Re: Red Hat's issues in considering the D language
On Thursday, 22 December 2016 at 03:18:42 UTC, Jerry wrote:
> Not using AliasSeq if that's what you mean. I don't know if the "tupleof" for a struct would be considered the same as "T..." but basically what I was doing:
>
> foreach(ref field; myLargeStruct.tupleof) { }

Yes, that is a compiler tuple as well, which means that the foreach is not a loop at all. Rather, its body gets duplicated myLargeStruct.tupleof.length times, leading to a giant number of statements, and under those conditions the O(N^2) and worse algorithms in the optimizer do not scale gracefully.
Re: Red Hat's issues in considering the D language
On Thursday, 22 December 2016 at 02:34:48 UTC, Stefan Koch wrote:
> On Thursday, 22 December 2016 at 02:32:30 UTC, Jerry wrote:
>> On Thursday, 22 December 2016 at 01:57:55 UTC, safety0ff wrote:
>>> On Thursday, 22 December 2016 at 01:30:44 UTC, Andrei Alexandrescu wrote:
>>>> Must be a pathological case we should fix anyway. -- Andrei
>>>
>>> Likely related bug has been open 5 years minus 1 day:
>>> https://issues.dlang.org/show_bug.cgi?id=7157
>>
>> Yup looks like that was the cause. Removed some of the functions that did a "foreach()" over some large tuples. Down to 26 seconds with that removed.
>
> tuples as in compiler tuples ? The T... kind ?

Not using AliasSeq if that's what you mean. I don't know if the "tupleof" for a struct would be considered the same as "T..." but basically what I was doing:

foreach(ref field; myLargeStruct.tupleof) { }
Re: Red Hat's issues in considering the D language
On Thursday, 22 December 2016 at 02:32:30 UTC, Jerry wrote:
> Yup looks like that was the cause. Removed some of the functions that did a "foreach()" over some large tuples. Down to 26 seconds with that removed.

Also: https://issues.dlang.org/show_bug.cgi?id=2396
Re: Red Hat's issues in considering the D language
On Thursday, 22 December 2016 at 02:32:30 UTC, Jerry wrote:
> On Thursday, 22 December 2016 at 01:57:55 UTC, safety0ff wrote:
>> On Thursday, 22 December 2016 at 01:30:44 UTC, Andrei Alexandrescu wrote:
>>> Must be a pathological case we should fix anyway. -- Andrei
>>
>> Likely related bug has been open 5 years minus 1 day:
>> https://issues.dlang.org/show_bug.cgi?id=7157
>
> Yup looks like that was the cause. Removed some of the functions that did a "foreach()" over some large tuples. Down to 26 seconds with that removed.

tuples as in compiler tuples ? The T... kind ?
Re: Red Hat's issues in considering the D language
On Thursday, 22 December 2016 at 01:57:55 UTC, safety0ff wrote:
> On Thursday, 22 December 2016 at 01:30:44 UTC, Andrei Alexandrescu wrote:
>> Must be a pathological case we should fix anyway. -- Andrei
>
> Likely related bug has been open 5 years minus 1 day:
> https://issues.dlang.org/show_bug.cgi?id=7157

Yup looks like that was the cause. Removed some of the functions that did a "foreach()" over some large tuples. Down to 26 seconds with that removed.
Re: Red Hat's issues in considering the D language
On Thursday, 22 December 2016 at 01:30:44 UTC, Andrei Alexandrescu wrote:
> Must be a pathological case we should fix anyway. -- Andrei

Likely related bug has been open 5 years minus 1 day: https://issues.dlang.org/show_bug.cgi?id=7157
Re: Red Hat's issues in considering the D language
On Thursday, December 22, 2016 00:59:27 hardreset via Digitalmars-d wrote: > On Wednesday, 21 December 2016 at 18:33:52 UTC, Brad Anderson > >> Moving the reference compiler to LLVM as was suggested in the > >> list. > > > > I've never been able to understand why it matters. > > Cause people think LDC is better and it would be a big win if > everyone focused just on that. It's not about which has "official > compiler" slapped on it, it's about where the development effort > is focused. Most of the focus is on the frontend, not any of the backends. So, most of the work is automatically shared across all of the compilers. It's just that the frontend isn't 100% compiler agnostic (though work has been done to get it there), so some work has to be done to get it and the glue layer updated, and dmd gets that first. LDC isn't far behind though. GDC's main problem is the hump in getting from the frontend being in C++ to it being in D, and once they've got that sorted out, I expect that they'll be _much_ faster at updating. Regardless, most of the effort is going towards stuff that has nothing to do with the compiler backend. - Jonathan M Davis
Re: Red Hat's issues in considering the D language
On 12/21/16 7:09 PM, Jerry wrote:
> On Wednesday, 21 December 2016 at 21:27:57 UTC, Jack Stouffer wrote:
>> On Wednesday, 21 December 2016 at 21:12:07 UTC, Jerry wrote:
>>> Any other backend would be better. DMD with -O takes over an hour for my project to compile. In comparison LDC with -O3 takes less than a minute and produces a faster binary. It doesn't really make sense to increase the workload maintaining 2-3 different compilers when D is already lacking manpower.
>>
>> A 60:1 speedup? I've never heard of that big of a difference before. Especially since LDC is typically slower to compile, even on massive code bases like Weka's. Could you please file a bug with some details?
>
> I ran it again, was a bit over a minute. But still 1 min 30 seconds compared to an hour.
>
> 1:07:40.162314 -- dmd with -O
> 0:01:28.632916 -- ldc2 with -O
> 0:00:23.802639 -- dmd without -O
> 0:00:33.818080 -- ldc2 without -O
>
> It'd be quite a bit of work to narrow down what it is and if it has something to do with how many structures I use or otherwise. I'd have to try and emulate that with test code as I can't use my code. Then the issue would just sit there for who knows how long. It's not that big of an issue, as I just use ldc2 instead anyways.

Would be great to narrow this down regardless. Shouldn't be too difficult since the penalty is so huge. Must be a pathological case we should fix anyway. -- Andrei
Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback
On 12/21/16 6:40 PM, Timothee Cour via Digitalmars-d wrote:
> Andrei: ping on this? (especially regarding allowing `:`)

I think "lazy" is a bit too cute. "with" is so close to what's actually needed, it would be a waste to not use it. Generally I'm wary of the use of ":" (never liked it - it makes code dependent on long-distance context) so I'd rather snatch the opportunity to avoid it. Andrei
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 18:33:52 UTC, Brad Anderson wrote:
> On Wednesday, 21 December 2016 at 16:41:56 UTC, hardreset wrote:
>> On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote:
>>> On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote:
>>>> On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote:
>>>>> Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well.
>>>>
>>>> Is moving to LLVM backend or LDC something that is on the roadmap?
>>>
>>> What does it mean to "move" to LDC? Why can't you use LDC now?
>>
>> Moving the reference compiler to LLVM as was suggested in the list.
>
> I've never been able to understand why it matters.

Cause people think LDC is better and it would be a big win if everyone focused just on that. It's not about which has "official compiler" slapped on it, it's about where the development effort is focused. That said, I don't really care; I was just curious what the proposed solution to the closed-source backend was.
Re: Red Hat's issues in considering the D language
On Wednesday, December 21, 2016 18:49:43 Johannes Pfau via Digitalmars-d wrote: > Am Wed, 21 Dec 2016 08:18:48 -0500 > > schrieb Andrei Alexandrescu : > > On 12/20/16 6:08 PM, Andrei Alexandrescu wrote: > > > Hello, a few engineers at Red Hat are taking a look at using the D > > > language on the desktop and have reached out to us. They have > > > created a list of issues. We are on the top-level ones, and of > > > course would appreciate any community help as well. > > > > > > https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b > > > > An engineer from Debian wrote down what's needed on the distribution > > side to give a green light to the D language: > > > > https://gist.github.com/ximion/fe6264481319dd94c8308b1ea4e8207a > > > > > > Andrei > > "GDC does not support creating shared libraries at time, which is a big > deal for distros which need it to reduce duplicate code and make > security fixes easier." > > You can cross that one off the list. > > "GDC only supports an ancient version of the D standard library, which > has many nice classes and also bugfixes missing." > > We're at 2.068.2 now. Still old, but good enough to run the latest > vibe.D release. Well, that's quite old at this point, and many programs will not build with it. vibe.d is a bit abnormal in that it tries to compile with several releases, whereas most projects tend to just use the latest. So, I think that the complaint that "GDC only supports an ancient version of the D standard library" is completely justified. At this point, if you want to compile with GDC, you pretty much need to target it and/or version portions of your code for different compilers or different compiler/library versions (which is what the vibe.d guys go to the extra effort of doing but very few D developers do). I fully expect that GDC will eventually catch up, but unfortunately, until it does, for many projects, it's useless. 
And I'd honestly recommend to people to avoid it until it does catch up, since otherwise, they're just going to run into compatibility problems, and when they ask questions on SO or in the forums about what does or doesn't work, they're going to have problems due to differences in what does and doesn't work with GDC vs dmd and ldc. - Jonathan M Davis
Re: Red Hat's issues in considering the D language
On Wednesday, December 21, 2016 15:46:19 Gerald via Digitalmars-d wrote: > Given that DMD is a non-starter for Linux packages, how feasible > is it to simply deprecate GDC and declare LDC as the > reference/production compiler for D? DMD could become the > experimental/future facing compiler used to evolve D as a > language but not meant to be used for production code. This would > resolve the non-free aspect of DMD as well as the ABI issue > between compilers. Anyone who wants to use ldc can use ldc. It doesn't need to be the reference compiler for that. And unlike gdc, it's actually pretty close to dmd. So, there should be no problem with folks using ldc for production right now if they want to. - Jonathan M Davis
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 21:27:57 UTC, Jack Stouffer wrote:
> On Wednesday, 21 December 2016 at 21:12:07 UTC, Jerry wrote:
>> Any other backend would be better. DMD with -O takes over an hour for my project to compile. In comparison LDC with -O3 takes less than a minute and produces a faster binary. It doesn't really make sense to increase the workload maintaining 2-3 different compilers when D is already lacking manpower.
>
> A 60:1 speedup? I've never heard of that big of a difference before. Especially since LDC is typically slower to compile, even on massive code bases like Weka's. Could you please file a bug with some details?

I ran it again, was a bit over a minute. But still 1 min 30 seconds compared to an hour.

1:07:40.162314 -- dmd with -O
0:01:28.632916 -- ldc2 with -O
0:00:23.802639 -- dmd without -O
0:00:33.818080 -- ldc2 without -O

It'd be quite a bit of work to narrow down what it is and if it has something to do with how many structures I use or otherwise. I'd have to try and emulate that with test code as I can't use my code. Then the issue would just sit there for who knows how long. It's not that big of an issue, as I just use ldc2 instead anyways.
Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback
On Mon, Dec 19, 2016 at 9:33 PM, Timothee Cour wrote: > what about using `lazy` instead of `with`: > > `with(import foo)` > => > `lazy(import foo)` > > advantages: > * avoids confusion regarding usual scoping rules of `with` ; > * conveys that the import is indeed lazy > > Furthermore (regardless of which keyword is used), what about allowing `:` > ``` > // case 1 > lazy(import foo) > void fun(){} > > // case 2 > lazy(import foo) { > void fun(){} > } > > // case 3 : this is new > lazy(import foo): > void fun1(){} > void fun2(){} > ``` > > advantages: > > * same behavior as other constructs which don't introduce a scope: > ``` > // case 1, 2 3 are allowed: > version(A): > static if(true): > private: > void fun(){} > ``` > > * avoids nesting when case 3 is used (compared to when using `{}`) > > * I would argue that grouping lazy imports is actually a common case; > without case 3, the indentation will increase. > Andrei: ping on this? (especially regarding allowing `:`)
Re: Red Hat's issues in considering the D language
On Wednesday, December 21, 2016 22:05:32 Yuxuan Shui via Digitalmars-d wrote: > On Wednesday, 21 December 2016 at 21:12:07 UTC, Jerry wrote: > > On Wednesday, 21 December 2016 at 16:41:58 UTC, Jesse Phillips > > > > wrote: > >> On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier > >> > >> wrote: > >>> [...] > >> > >> People that want to use D, want to use the latest and > >> greatest. The reference compiler moves the fastest so they > >> want the reference compiler to be switched to a different > >> backend. Why a FOSS back end is required to use D depends on > >> the person, usually it is political. > > > > Any other backend would be better. DMD with -O takes over an > > hour for my project to compile. In comparison LDC with -O3 > > takes less than a minute and produces a faster binary. It > > doesn't really make sense to increase the workload maintaining > > 2-3 different compilers when D is already lacking manpower. > > That sounds like a bug in the DMD backend... Definitely. It is almost always the case that building a program with dmd is much faster than building with gdc or ldc. The tradeoff is that gdc and ldc do a much better job optimizing the resultant binary. So, with dmd, you get fast compilation but a somewhat slower binary, whereas with gdc and ldc, you get slow compilation but a faster binary. If anyone is seeing dmd compile anything significantly more slowly than gdc or ldc, then dmd has a bug, and it should be reported (though reducing the code to something reportable can be entertaining; fortunately, dustmite can be a big help with that). - Jonathan M Davis
Re: DIP 1007 - keywords as identifiers with an escape symbol - feedback
On Monday, 19 December 2016 at 10:28:31 UTC, Basile B. wrote:
> On Monday, 19 December 2016 at 09:58:28 UTC, default0 wrote:
>> That doesn't solve the complications this introduces if you want to serialize to/from members with these names, as seen in the Examples section of the DIP.
>
> Yes it does. See my answer to Stefan. In the code you write #delegate, but the identifier is, as known by the compiler, just "delegate". See the unit tests that passed already several times: https://github.com/dlang/dmd/pull/6324/files#diff-60ac3d231ebb78f79477cc2520a37200R19

Actually the second example didn't work. I've updated the DIP and added (and this time tested...) an archaic serialization system that shows the point more clearly. It should be quite straightforward to test. The lexer is rarely modified, so I doubt there'll ever be any conflict when rebasing to master.
Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback
On 12/20/2016 09:31 AM, Dmitry Olshansky wrote: On 12/13/16 11:33 PM, Andrei Alexandrescu wrote: Destroy. https://github.com/dlang/DIPs/pull/51/files Andrei Just a thought but with all of proliferation of imports down to each declaration comes the pain that e.g. renaming a module cascades to countless instances of import statements. This is true of local imports as well but the problem gets bigger. https://github.com/dlang/DIPs/pull/51/commits/d4ef6826dacedc38f822e48bec2186d93040fb42 Andrei
Re: Improvement in pure functions specification
On 12/21/2016 04:42 PM, Johan Engelen wrote:
> On Wednesday, 21 December 2016 at 21:34:04 UTC, Andrei Alexandrescu wrote:
>> On 12/21/2016 03:04 PM, Johan Engelen wrote:
>>> Super contrived, but I hope you get my drift:
>>> ```
>>> int *awesome() pure {
>>>     static if (ohSoAwesome) {
>>>         return new int;
>>>     } else {
>>>         return null;
>>>     }
>>> }
>>> ```
>>
>> Where does ohSoAwesome come from?
>
> A random bool. Perhaps something like this:
> ```
> version(LDC) ohSoAwesome = true;
> else ohSoAwesome = false
> ```

Well, randomness is not available in pure functions. Anyhow, I've reformulated the wording and added an example. The sheer fact it works is pretty awesome. https://github.com/dlang/dlang.org/pull/1528 For now I didn't want to give thrown values any special treatment, i.e. maximum freedom for the implementation. Andrei
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 21:12:07 UTC, Jerry wrote: On Wednesday, 21 December 2016 at 16:41:58 UTC, Jesse Phillips wrote: On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote: [...] People that want to use D, want to use the latest and greatest. The reference compiler moves the fastest so they want the reference compiler to be switched to a different backend. Why a FOSS back end is required to use D depends on the person, usually it is political. Any other backend would be better. DMD with -O takes over an hour for my project to compile. In comparison LDC with -O3 takes less than a minute and produces a faster binary. It doesn't really make sense to increase the workload maintaining 2-3 different compilers when D is already lacking manpower. That sounds like a bug in the DMD backend...
Re: Improvement in pure functions specification
On Wednesday, 21 December 2016 at 21:34:04 UTC, Andrei Alexandrescu wrote:
> On 12/21/2016 03:04 PM, Johan Engelen wrote:
>> Super contrived, but I hope you get my drift:
>> ```
>> int *awesome() pure {
>>     static if (ohSoAwesome) {
>>         return new int;
>>     } else {
>>         return null;
>>     }
>> }
>> ```
>
> Where does ohSoAwesome come from?

A random bool. Perhaps something like this:
```
version(LDC) ohSoAwesome = true;
else ohSoAwesome = false
```
;-) Johan
Re: Improvement in pure functions specification
On Wednesday, 21 December 2016 at 21:34:04 UTC, Andrei Alexandrescu wrote:
> On 12/21/2016 03:04 PM, Johan Engelen wrote:
>> I don't know what "required to honor all calls" means, but I guess it means
>> ```
>> auto a = foo(); // int* foo() pure;
>> auto b = foo();
>> ```
>> cannot be transformed to
>> ```
>> auto a = foo(); // int* foo() pure;
>> auto b = a;
>> ```
>
> That is correct.

Is that _all_ it is saying? Or is it also saying this:
```
void bar() {
    auto a = foo(); // int* foo() pure;
}
// cannot remove the call to bar, because bar calls a pure function and all calls must be "honored"
bar();
```
Re: Improvement in pure functions specification
On 12/21/2016 03:59 PM, John Colvin wrote:
> On Wednesday, 21 December 2016 at 15:40:42 UTC, Andrei Alexandrescu wrote:
>> On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote:
>>> https://github.com/dlang/dlang.org/pull/1528 -- Andrei
>>
>> Dropped the void functions. On to the next scandal: A function that accepts only parameters without mutable indirections and returns a result that has mutable indirections is called a $(I pure factory function). An implementation may assume that all mutable memory returned by the call is not referenced by any other part of the program, i.e. it is newly allocated by the function. Andrei
>
> There are 3 levels:
>
> 1) no idea what's going on: e.g. the function returns a mutable reference and also reads from global mutable memory. That would be not pure.
>
> 2) memory must be new: e.g. returns 2 mutable references, no accessing external mutable memory.

Yah, they could refer one another.

> 3) memory must be new and uniquely referenced: function returns 1 mutable reference, does not access external mutable memory.

Yah.

> If I'm not mistaken only 3 enables anything useful like implicit casts to immutable.

The formulation is careful to not specify what can be done. For now "not referenced by any other part of the program" nicely covers 2 and 3.

> Also, "returned references" should be extended to include "out" parameters, because there's no difference as far as memory uniqueness is concerned.

Cool idea. Andrei
Re: Improvement in pure functions specification
On 12/21/2016 03:10 PM, Johan Engelen wrote: On Wednesday, 21 December 2016 at 20:04:04 UTC, Johan Engelen wrote: "Any `pure` function that is not strongly pure _may not be assumed to be_ memoizable." That version of mine is also not correct :( How about: "A strongly pure function can be assumed to be memoizable. For a not strongly pure function, well, `pure` does not add information regarding memoizability." OK save for the colloquial "well". -- Andrei
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 21:12:07 UTC, Jerry wrote: Any other backend would be better. DMD with -O takes over an hour for my project to compile. In comparison LDC with -O3 takes less than a minute and produces a faster binary. It doesn't really make sense to increase the workload maintaining 2-3 different compilers when D is already lacking manpower. A 60:1 speedup? I've never heard of that big of a difference before. Especially since LDC is typically slower to compile, even on massive code bases like Weka's. Could you please file a bug with some details?
Re: Improvement in pure functions specification
On 12/21/2016 03:04 PM, Johan Engelen wrote: On Wednesday, 21 December 2016 at 15:40:42 UTC, Andrei Alexandrescu wrote: On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Dropped the void functions. On to the next scandal:

I think you should be very careful in making sure that `pure` does not become a pessimization keyword.

```
$(P Any `pure` function that is not strongly pure cannot be memoized.
The compiler is required to honor all calls to the function, even if it
appears to do nothing. (Example: a pure function taking no arguments and
returning `int*` cannot be memoized. It also should not because it may
be a typical factory function returning a fresh pointer with each call.))
```

I don't know what "required to honor all calls" means, but I guess it means

```
auto a = foo(); // int* foo() pure;
auto b = foo();
```

cannot be transformed to

```
auto a = foo(); // int* foo() pure;
auto b = a;
```

That is correct.

Super contrived, but I hope you get my drift:

```
int* awesome() pure
{
    static if (ohSoAwesome)
    {
        return new int;
    }
    else
    {
        return null;
    }
}
```

Where does ohSoAwesome come from? Tagging this function with `pure` would be a pessimization. Not to worry. It's all up to what can be detected. If inlining is in effect then definitely things can be optimized appropriately.

Instead of "Any `pure` function that is not strongly pure cannot be memoized." why not "Any `pure` function that is not strongly pure _may not be assumed to be_ memoizable." Got it. Good point. Will do.

Another example:

```
/// Note: need to mark this function as non-pure, because otherwise the
/// compiler deduces it as pure and then pessimizes our code.
int* bar(bool returnNull) nonpure
{
    if (returnNull)
    {
        return null;
    }
    else
    {
        return new ...;
    }
}

auto a = bar(true);
auto b = bar(true);
auto c = bar(true);
```

My concern with the current wording (like for the void function thing) is that it actively prohibits the compiler from doing a transformation even if that is valid. Yah, we kind of assume without stating that whole "observable behavior" that C++ does. Andrei
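[To make the distinction under discussion concrete, here is a hedged sketch in D; `fresh` and `sq` are illustrative names, not code from the thread. A pure factory function returning `int*` observably allocates, so its calls must not be merged, whereas a strongly pure value-returning function could in principle be memoized.]

```
// A pure factory function: no parameters, result has mutable
// indirection. Each call may yield a distinct allocation, so the
// compiler must not merge the calls.
int* fresh() pure
{
    return new int;
}

// Strongly pure: value parameter, value result. A compiler is
// permitted (not required) to reuse a previous result for equal inputs.
int sq(int x) pure nothrow @safe
{
    return x * x;
}

void main()
{
    auto a = fresh();
    auto b = fresh();   // must not be rewritten to `auto b = a;`
    assert(a !is b);    // the distinct allocations are observable

    auto c = sq(4);
    auto d = sq(4);     // this call is eligible for elision/memoization
    assert(c == d);
}
```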
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 16:41:58 UTC, Jesse Phillips wrote: On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: Is moving to LLVM backend or LDC something that is on the roadmap? What does it mean to "move" to LDC? Why can't you use LDC now? People that want to use D, want to use the latest and greatest. The reference compiler moves the fastest so they want the reference compiler to be switched to a different backend. Why a FOSS back end is required to use D depends on the person, usually it is political. Any other backend would be better. DMD with -O takes over an hour for my project to compile. In comparison LDC with -O3 takes less than a minute and produces a faster binary. It doesn't really make sense to increase the workload maintaining 2-3 different compilers when D is already lacking manpower.
Re: Improvement in pure functions specification
On Wednesday, 21 December 2016 at 15:40:42 UTC, Andrei Alexandrescu wrote: On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Dropped the void functions. On to the next scandal: A function that accepts only parameters without mutable indirections and returns a result that has mutable indirections is called a $(I pure factory function). An implementation may assume that all mutable memory returned by the call is not referenced by any other part of the program, i.e. it is newly allocated by the function. Andrei

There are 3 levels:

1) no idea what's going on: e.g. the function returns a mutable reference and also reads from global mutable memory.

2) memory must be new: e.g. returns 2 mutable references, no accessing external mutable memory.

3) memory must be new and uniquely referenced: the function returns 1 mutable reference and does not access external mutable memory.

If I'm not mistaken only 3 enables anything useful like implicit casts to immutable. Also, "returned references" should be extended to include "out" parameters, because there's no difference as far as memory uniqueness is concerned.
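[Level 3 is the case that already powers D's unique-expression conversions: the result of a strongly pure factory call may convert to immutable implicitly. A minimal sketch; `makeSquares` is a made-up name:]

```
// Pure factory function: the parameter carries no mutable indirection,
// the result does, so the returned memory must be freshly allocated.
int[] makeSquares(int n) pure
{
    auto a = new int[](n);
    foreach (i, ref e; a)
        e = cast(int) (i * i);
    return a;
}

void main()
{
    // The call result is known to be uniquely referenced, so it
    // implicitly converts to immutable without a cast or a copy.
    immutable int[] squares = makeSquares(4);
    assert(squares == [0, 1, 4, 9]);
}
```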
Re: Improvement in pure functions specification
On Wednesday, December 21, 2016 15:49:35 Stefan Koch via Digitalmars-d wrote:
> On Wednesday, 21 December 2016 at 15:40:42 UTC, Andrei
> Alexandrescu wrote:
> > On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote:
> >> https://github.com/dlang/dlang.org/pull/1528 -- Andrei
> >
> > Dropped the void functions. On to the next scandal:
> >> A function that accepts only parameters without mutable
> >> indirections and returns a result that has mutable indirections
> >> is called a $(I pure factory function). An implementation may
> >> assume that all mutable memory returned by the call is not
> >> referenced by any other part of the program, i.e. it is newly
> >> allocated by the function.
> >
> > Andrei
>
> Couldn't this be folded into:
> "The implementation may not remove a call to a pure function if
> it does allocate memory"?
>
> Since there is the concept of weakly pure functions the compiler
> cannot decide to remove functions on signature alone.
> Meaning the body has to be available for it to even attempt to
> elide the call.
>
> Therefore specifying implementation behavior based on the
> function signature is misleading IMO.

Why would the function body need to be there to elide the call? Only calls to "strongly" pure functions can be elided when called multiple times, so "weak" purity doesn't enter into the equation. And how "strong" a pure function is has everything to do with its signature and nothing to do with its body. pure has always been designed with the idea that it would be the function signature that mattered. The body only comes into play when inferring purity. What Andrei has put here is to codify what the compiler needs to look at to determine whether a strongly pure function may have allocated and returned that memory (or something that referred to that memory) and made it so that the compiler is not allowed to elide the call in that specific case.
There is no need to specifically mention memory allocation unless you're looking to indicate why the spec is saying that such calls cannot be elided. - Jonathan M Davis
Re: Improvement in pure functions specification
On Wednesday, 21 December 2016 at 20:04:04 UTC, Johan Engelen wrote: "Any `pure` function that is not strongly pure _may not be assumed to be_ memoizable." That version of mine is also not correct :( How about: "A strongly pure function can be assumed to be memoizable. For a not strongly pure function, well, `pure` does not add information regarding memoizability."
Re: Improvement in pure functions specification
On Wednesday, 21 December 2016 at 15:40:42 UTC, Andrei Alexandrescu wrote: On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Dropped the void functions. On to the next scandal:

I think you should be very careful in making sure that `pure` does not become a pessimization keyword.

```
$(P Any `pure` function that is not strongly pure cannot be memoized.
The compiler is required to honor all calls to the function, even if it
appears to do nothing. (Example: a pure function taking no arguments and
returning `int*` cannot be memoized. It also should not because it may
be a typical factory function returning a fresh pointer with each call.))
```

I don't know what "required to honor all calls" means, but I guess it means

```
auto a = foo(); // int* foo() pure;
auto b = foo();
```

cannot be transformed to

```
auto a = foo(); // int* foo() pure;
auto b = a;
```

Super contrived, but I hope you get my drift:

```
int* awesome() pure
{
    static if (ohSoAwesome)
    {
        return new int;
    }
    else
    {
        return null;
    }
}
```

Tagging this function with `pure` would be a pessimization.

Instead of "Any `pure` function that is not strongly pure cannot be memoized." why not "Any `pure` function that is not strongly pure _may not be assumed to be_ memoizable."

Another example:

```
/// Note: need to mark this function as non-pure, because otherwise the
/// compiler deduces it as pure and then pessimizes our code.
int* bar(bool returnNull) nonpure
{
    if (returnNull)
    {
        return null;
    }
    else
    {
        return new ...;
    }
}

auto a = bar(true);
auto b = bar(true);
auto c = bar(true);
```

My concern with the current wording (like for the void function thing) is that it actively prohibits the compiler from doing a transformation even if that is valid. -Johan
Re: ModuleInfo, factories, and unittesting
On 2016-12-21 18:43, Johannes Pfau wrote: Back to topic: I'd really love to see a generalization of RTInfo/mixin templates in D: I implemented RTInfo for modules a couple of years ago [1]. Unfortunately it was rejected because it had the same problem as RTInfo, it only works inside object.d. I had some ideas for that as well but it was not liked either. [1] https://github.com/dlang/dmd/pull/2271 -- /Jacob Carlborg
Re: D future ...
On 12/21/2016 6:24 AM, Mark wrote: I do not think that this would be a bad use of the foundation's funds. That is one of the purposes of the Foundation.
Re: Red Hat's issues in considering the D language
On 12/21/2016 12:49 PM, Johannes Pfau wrote: We're at 2.068.2 now. Johannes, are you personally involved with gdc? If so please email me. Thanks! -- Andrei
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 18:33:52 UTC, Brad Anderson wrote: On Wednesday, 21 December 2016 at 16:41:56 UTC, hardreset wrote: On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: [...] Is moving to LLVM backend or LDC something that is on the roadmap? What does it mean to "move" to LDC? Why can't you use LDC now? Moving the reference compiler to LLVM as was suggested in the list. I've never been able to understand why it matters. You can use LDC or GDC now. Slapping the name "reference compiler" on one of them won't change anything. I think most frontend developers prefer working in the DMD umbrella for speed and simplicity reasons. Editing and building DMD is dead simple. In theory the backend should be completely divorced from the frontend and people would be editing a libd repo or something and there wouldn't be a need for a reference compiler. It would simplify the development process for DRuntime, LDC and GDC. In addition, DMD support for numeric libraries requires more effort and workarounds. DMD is less documented than LLVM (this is important for numeric and betterC libraries) --Ilya
Re: Red Hat's issues in considering the D language
On Wed, Dec 21, 2016 at 06:33:52PM +, Brad Anderson via Digitalmars-d wrote: [...] > In theory the backend should be completely divorced from the frontend > and people would be editing a libd repo or something and there > wouldn't be a need for a reference compiler. Isn't our plan to eventually split the backend from the frontend? But I understand that will be a long process, given the current state of the code. T -- Why are you blatanly misspelling "blatant"? -- Branden Robinson
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 16:41:56 UTC, hardreset wrote: On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. Is moving to LLVM backend or LDC something that is on the roadmap? What does it mean to "move" to LDC? Why can't you use LDC now? Moving the reference compiler to LLVM as was suggested in the list. I've never been able to understand why it matters. You can use LDC or GDC now. Slapping the name "reference compiler" on one of them won't change anything. I think most frontend developers prefer working in the DMD umbrella for speed and simplicity reasons. Editing and building DMD is dead simple. In theory the backend should be completely divorced from the frontend and people would be editing a libd repo or something and there wouldn't be a need for a reference compiler.
Re: ModuleInfo, factories, and unittesting
On 12/20/2016 10:36 PM, Walter Bright wrote: > On 12/20/2016 11:05 AM, Dicebot wrote: >> Yes, pretty much. What ways do you have in mind? I am only aware of two: >> >> 1) ModuleInfo >> 2) https://dlang.org/spec/traits.html#getUnitTests > > > Put pointers to them in a special segment. Oh, so you have meant "other ways can be implemented", not "other ways exist"? Sure. It does need to include qualified module names in that info though to preserve existing test runner functionality.
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 17:49:43 UTC, Johannes Pfau wrote: We're at 2.068.2 now. Still old, but good enough to run the latest vibe.D release. Just a quick heads up (and maybe motivation): the upcoming 0.8.0 release will drop the support for 2.068 ;-) https://github.com/rejectedsoftware/vibe.d/commit/ce9c1250aeef97c948787192136e611525c3df3c
Re: Red Hat's issues in considering the D language
On Wed, 21 Dec 2016 15:46:19 UTC, Gerald wrote:
> Given that DMD is a non-starter for Linux packages, how feasible
> is it to simply deprecate GDC and declare LDC as the
> reference/production compiler for D?

Hey, GDC is still in active development ;-) We need some more time to catch up but we'll get there. OTOH if people start compiling recent D code with compilers from debian stable you'll have to support old frontend versions anyway :-P
Re: D future ...
On Wednesday, 21 December 2016 at 14:50:31 UTC, thedeemon wrote: On Wednesday, 21 December 2016 at 11:54:35 UTC, Ilya Yaroshenko wrote: On Wednesday, 21 December 2016 at 11:36:14 UTC, thedeemon wrote: On Tuesday, 20 December 2016 at 10:18:12 UTC, Kelly Sommers wrote: [...] Bad news: without complete redesign of the language and turning into one more C++/CLI (where you have different kinds of pointers in the language for GC and non-GC), having C performance and Go-style low-pause GC is not really possible. You have to choose one. Go chose GC with short pauses but paid with slow speed overall and slow C interop. D chose C-level performance but paid for it with a slow GC. If this is true, a blog post about it with more details is very welcome --Ilya Have you seen this one? http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html A recent blog post regarding the Go garbage collector with pertinent info: https://medium.com/@octskyward/modern-garbage-collection-911ef4f8bd8e
Re: Red Hat's issues in considering the D language
On Wed, 21 Dec 2016 08:18:48 -0500, Andrei Alexandrescu wrote:
> On 12/20/16 6:08 PM, Andrei Alexandrescu wrote:
> > Hello, a few engineers at Red Hat are taking a look at using the D
> > language on the desktop and have reached out to us. They have
> > created a list of issues. We are on the top-level ones, and of
> > course would appreciate any community help as well.
> >
> > https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b
>
> An engineer from Debian wrote down what's needed on the distribution
> side to give a green light to the D language:
>
> https://gist.github.com/ximion/fe6264481319dd94c8308b1ea4e8207a
>
> Andrei

"GDC does not support creating shared libraries at time, which is a big deal for distros which need it to reduce duplicate code and make security fixes easier." You can cross that one off the list.

"GDC only supports an ancient version of the D standard library, which has many nice classes and also bugfixes missing." We're at 2.068.2 now. Still old, but good enough to run the latest vibe.D release.

From a compiler dev point of view I think one of the most important issues is the stable ABI. Many of the compiler-specific problems could be solved easily if we could mix code from different compilers.
Re: ModuleInfo, factories, and unittesting
On Tue, 20 Dec 2016 12:36:53 -0800, Walter Bright wrote:
> On 12/20/2016 11:05 AM, Dicebot wrote:
> > Yes, pretty much. What ways do you have in mind? I am only aware of
> > two:
> >
> > 1) ModuleInfo
> > 2) https://dlang.org/spec/traits.html#getUnitTests
>
> Put pointers to them in a special segment.

You need some kind of linker support to do this to provide the start/end symbols. Binutils has got a nice feature to do that automatically for you though: https://sourceware.org/binutils/docs/ld/Orphan-Sections.html "If an orphaned section's name is representable as a C identifier then the linker will automatically PROVIDE two symbols: __start_SECNAME and __stop_SECNAME, where SECNAME is the name of the section."

However, things get a little more complicated when shared libraries are involved. You'll have to make the __start/__stop symbols private to the library (that's easy) and somehow provide a function in every library to access the library-private __start/__stop symbols. That's basically how we assemble the list of moduleinfos in rt.sections.

Back to topic: I'd really love to see a generalization of RTInfo/mixin templates in D:

```
@auto mixin template scanUnittests(Module)
{
    static if (hasUnittest!Module)
    {
        static this()
        {
            foreach (test; getUnittests!Module)
                test();
        }
    }
}
```

Whenever you import a module containing an @auto mixin the compiler would mix the declaration into the module. This should be incredibly powerful for serialization, std.benchmark and all kinds of introspection tasks. (You then still need some way to pass this information to the 'runtime' world. Either static this, C-style ctors or custom data sections are possibilities. OTOH a mixin can also define members that are accessible from the outside if the module name is known.)
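[For comparison, mechanism 2) from the quoted list already works today; a small sketch, where `runModuleTests` is a made-up name and the unittest list is only populated when compiling with `-unittest`:]

```
module m;

import std.stdio;

unittest
{
    assert(1 + 1 == 2);
}

// Collect this module's unittest functions via the trait and run them
// directly, bypassing the default runtime test runner.
void runModuleTests()
{
    foreach (test; __traits(getUnitTests, m))
        test();
    writeln("tests done");
}
```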
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 16:41:56 UTC, hardreset wrote: Moving the reference compiler to LLVM as was suggested in the list. LDC is the only compiler on Fedora/CentOS anyway!
Re: Socket missing option: SO_REUSEPORT
On Wednesday, 21 December 2016 at 13:01:53 UTC, Benjiro wrote: Just checked the socket code: there is a small feature missing.

```
enum SocketOption: int
{
    DEBUG     = SO_DEBUG,     /// Record debugging information
    BROADCAST = SO_BROADCAST, /// Allow transmission of broadcast messages
    REUSEADDR = SO_REUSEADDR, /// Allow local reuse of address
```

There needs to be added:

```
    REUSEPORT = SO_REUSEPORT, /// Allow local reuse of the port
```

I don't think this needs weeks of discussion ;)

SO_REUSEPORT is not supported on Windows.
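[Until such an enum member exists, the raw option value can be passed through `std.socket`'s `setOption` with a cast. A hedged, Linux-only sketch: the value 15 for `SO_REUSEPORT` is the common Linux constant and is hard-coded here as an assumption, so verify it per platform (and note the kernel must be 3.9 or newer):]

```
import std.socket;

void main()
{
    version (linux)
    {
        // Not in std.socket's SocketOption enum; Linux-specific value,
        // assumed here for illustration.
        enum SO_REUSEPORT = 15;

        auto s = new TcpSocket();
        scope (exit) s.close();

        // setOption has an int overload; the cast smuggles in the
        // missing option constant.
        s.setOption(SocketOptionLevel.SOCKET,
                    cast(SocketOption) SO_REUSEPORT, 1);
    }
}
```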
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: Is moving to LLVM backend or LDC something that is on the roadmap? What does it mean to "move" to LDC? Why can't you use LDC now? People that want to use D, want to use the latest and greatest. The reference compiler moves the fastest so they want the reference compiler to be switched to a different backend. Why a FOSS back end is required to use D depends on the person, usually it is political.
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 16:30:15 UTC, bachmeier wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. Is moving to LLVM backend or LDC something that is on the roadmap? What does it mean to "move" to LDC? Why can't you use LDC now? Moving the reference compiler to LLVM as was suggested in the list.
Re: Red Hat's issues in considering the D language
On 12/21/2016 11:32 AM, hardreset wrote: On Wednesday, 21 December 2016 at 16:20:31 UTC, Jack Stouffer wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: Is moving to LLVM backend or LDC something that is on the roadmap? Nope. So whats the solution to the "DMD compiler issues" listed? We are working on it, cannot disclose more for the time being. -- Andrei
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 16:20:31 UTC, Jack Stouffer wrote: On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: Is moving to LLVM backend or LDC something that is on the roadmap? Nope. So whats the solution to the "DMD compiler issues" listed?
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. Is moving to LLVM backend or LDC something that is on the roadmap? What does it mean to "move" to LDC? Why can't you use LDC now?
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 10:15:26 UTC, hardreset wrote: Is moving to LLVM backend or LDC something that is on the roadmap? Nope.
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 15:46:19 UTC, Gerald wrote: Given that DMD is a non-starter for Linux packages, how feasible is it to simply deprecate GDC and declare LDC as the reference/production compiler for D? DMD could become the experimental/future facing compiler used to evolve D as a language but not meant to be used for production code. This would resolve the non-free aspect of DMD as well as the ABI issue between compilers. These are choices that are made by individual developers. Someone wanting to use one compiler or the other can simply do so.
Re: Improvement in pure functions specification
On Wednesday, 21 December 2016 at 15:40:42 UTC, Andrei Alexandrescu wrote: On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Dropped the void functions. On to the next scandal: A function that accepts only parameters without mutable indirections and returns a result that has mutable indirections is called a $(I pure factory function). An implementation may assume that all mutable memory returned by the call is not referenced by any other part of the program, i.e. it is newly allocated by the function. Andrei Couldn't this be folded into: "The implementation may not remove a call to a pure function if it does allocate memory"? Since there is the concept of weakly pure functions the compiler cannot decide to remove functions on signature alone. Meaning the body has to be available for it to even attempt to elide the call. Therefore specifying implementation behavior based on the function signature is misleading IMO.
Re: Red Hat's issues in considering the D language
On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b I'm the author of Terminix (https://github.com/gnunn1/terminix), a semi-popular terminal emulator for Gnome and Linux. Ximion was the driving force behind getting terminix, ldc and other D related programs packaged for Debian. I'm glad he took the time to write up the issues and share them here. Most of the issues he highlights are relevant for all of the Linux distros so solving them would really help applications written in D gain a wider audience and make it more viable for developers to choose it. Given that DMD is a non-starter for Linux packages, how feasible is it to simply deprecate GDC and declare LDC as the reference/production compiler for D? DMD could become the experimental/future facing compiler used to evolve D as a language but not meant to be used for production code. This would resolve the non-free aspect of DMD as well as the ABI issue between compilers. It should also be noted that Gnome is looking into Rust as well: http://www.phoronix.com/scan.php?page=news_item&px=GNOME-Potential-Rust
Re: Improvement in pure functions specification
On 12/20/2016 05:49 PM, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Dropped the void functions. On to the next scandal: A function that accepts only parameters without mutable indirections and returns a result that has mutable indirections is called a $(I pure factory function). An implementation may assume that all mutable memory returned by the call is not referenced by any other part of the program, i.e. it is newly allocated by the function. Andrei
Re: Red Hat's issues in considering the D language
On 2016-12-21 15:58, Madaz Hill wrote: I'd like to add that the windows version would require another change so that DMD becomes true FOSS. Unless the 32 bit version get dropped away, the standard C library, snn.lib, is even not open-sourced (which is a worst than the backend situation) !

A. The 64-bit version uses the Microsoft tool chain; how is that more open source?
B. It's possible to use the Microsoft tool chain when compiling for 32-bit as well.

-- /Jacob Carlborg
Re: Improvement in pure functions specification
On 12/21/2016 05:08 AM, Timon Gehr wrote: On 21.12.2016 01:58, Andrei Alexandrescu wrote: On 12/20/16 7:40 PM, Timon Gehr wrote: On 20.12.2016 23:49, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Good, except: "$(P `pure` functions returning `void` will be always called even if it is strongly `pure`. The implementation must assume the function does something outside the confines of the type system and is therefore not allowed to elide the call, even if it appears to have no possible effect.)" I think this makes no sense. What is the idea behind this paragraph? A function that traces execution via a debug statement, for example. -- Andrei

IMNSHO, that shouldn't restrict a non-debug build, and it should not be thought of as being outside the type system's confines. Also:

- 'void' should not be a special case: either all pure functions can be optimized, or none of them. (A void-returning pure function can be called in a non-void-returning pure function.)

- pure functions cannot be elided without considering their bodies in any case, as they can terminate the program by throwing an error:

```
immutable(void) fail() pure nothrow @safe
{
    throw new Error("boom!");
}
```

or cause the program to fail to terminate:

```
immutable(void) loop() pure nothrow @safe
{
    for (;;) {}
}
```

I'm not a fan of claiming to be cleverer than all future programmers. Allowing pure void(void) functions and making the compiler call them compulsively was my way of saying "hm, this is clever. This is too blatant of an error so I can only assume you wrote this function hoping it will be called. No idea what you do there, but I'm going to call it." Apparently everybody else in this thread is clever enough so I'll eliminate the special case. Andrei
Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback
On 12/20/2016 11:32 PM, Joakim wrote: On Tuesday, 20 December 2016 at 20:51:54 UTC, Andrei Alexandrescu wrote: Thanks for this analysis of the remaining dependency graph, it is worth looking at. Allow me to poke some holes in it. To begin with, the amount of scoping that has been done is overstated, if you simply count scoped imports and compare it to module-level imports. Each module-level import has to be replicated multiple times for each local scope, especially in unittest blocks. A better number is more like 20-30%, as I pointed out 4 out of 13 modules remain at top-level in std.array. Using that metric, a 3-4X reduction in top-level imports has led to at least a 2.2x improvement in imported files, so the effort has been more meaningful than you conclude. Fixed. I also made a note about the need to duplicate imports as they are pushed down. Second, as I noted above, most top-level imports have not been made selective yet, because of the symbol leak bug that was recently fixed by Martin. You will see in my PRs that I only list those symbols as a comment, because I could not turn those into selective imports yet. If the compiler is doing its job right, selective imports should greatly reduce the cost of importing a module, even if your metric would still show the module being imported. That is not relevant to this section, which discusses the effectiveness of using local imports with the current compilation technology. Per the section's opening sentence: A legitimate question to ask is whether consistent use of local imports wherever possible would be an appropriate approximation of the Dependency-Carrying Declarations goal with no change in the language at all. The section "Alternative: Lazy Imports" discusses how static or local imports could be used in conjunction with new compilation technologies. If there are improvements to be made there, please advise. 
Third, checking some of the output from the commands you ran in your script shows that up to half of the imported modules are from druntime. I noted earlier that Ilya usually didn't bother scoping top-level druntime imports, because he perceived their cost to be low (I scoped them too in the handful I cleaned up, just for completeness). As far as I know, nobody has bothered to spend any time scoping druntime, so it would be better if you filtered out druntime imports from your analysis. Fixed to only count imports from std. Finally, while it's nice to know the extent of the dependency graph, what really matters is the _cost_ of each link of the graph, which is what I keep hammering on. If the cost of links is small, it doesn't matter how entangled it is. If minimizing the dependency graph through scoping alone, ie without implementing this DIP, removes most of the cost, that's all I care about. In first approximation, whether a file gets opened or not makes a difference (filesystem operations (possibly including networking), necessity to rebuild if dependent code is changed. The analysis shows there is significant overhead remaining, on average 10.5 additional files per unused import. If the current document could be clearer in explaining costs, please let me know. I have noted one example above, where _a single DCD in phobos_, ie a scoped, selective import, had gigantic costs in terms of executable size, where entire modules were included because of it. If that's the case more generally, then _no_ amount of dependency disentangling will matter, because the cost of single DCDs is still huge. Perhaps that's just an isolated issue however, it needs to be investigated. That seems an unrelated matter. Yes, you could pull a large dependency in one shot with old or new technology. My point is that the dependency graph matters, but now that we're getting down to the last entanglements, we need to know the cost of those last links. 
Your dependency analysis gives us some quantitative idea of the size of the remaining graph, but tells us nothing about the cost of those links. That's what I'm looking for. I will spend some time now investigating those costs with sample code. My request all along has been that you give us some idea of those costs, if you know the answer already. I don't know how to make matters much clearer than the current document. Any suggestions are welcome. The section "Workaround: Are Local Imports Good Enough?" discusses the material cost in terms of extra files that need to be opened and parsed (some unnecessarily) in order to complete a compilation. The "Rationale" part of the document discusses the costs in terms of maintainability, clarity, and documentation. Thanks, Andrei
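To make the technique under discussion concrete, here is a minimal sketch (a hypothetical module, not actual Phobos code) of pushing a module-level import down into the one declaration that needs it, as a selective import:

```d
// Module-level form, paid for by every importer of this module:
//
//     import std.algorithm; // pulls in std.algorithm and everything it imports
//
// Scoped, selective form: only the named symbol is looked up, and only
// when firstSorted is actually compiled.
auto firstSorted(int[] a)
{
    import std.algorithm.sorting : sort; // local, selective import
    a.sort();
    return a[0];
}

unittest
{
    // Note the duplication cost mentioned in the thread: each scope
    // (including unittest blocks) must repeat the imports it needs.
    import std.algorithm.sorting : sort;
    assert(firstSorted([3, 1, 2]) == 1);
}
```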
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 13:26:14 UTC, ixid wrote: On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b Thanks, Andrei What is the story over the ownership of DMD's backend? I believe Walter's former employer has some stake in it. Has Walter spoken to them about them donating whatever rights they have to the D foundation? You have the answer elements here: https://forum.dlang.org/search?q=backend%20symantec&page=1. tl;dr: the backend comes from a commercial C++ compiler that was written by Bright but commercialized by Symantec. This company still owns the rights. I'd like to add that the Windows version would require another change before DMD becomes true FOSS. Unless the 32-bit version gets dropped entirely, there is also the standard C library, snn.lib, which is not even open-sourced (an even worse situation than the backend)!
Re: D future ...
On Wednesday, 21 December 2016 at 14:50:31 UTC, thedeemon wrote: On Wednesday, 21 December 2016 at 11:54:35 UTC, Ilya Yaroshenko wrote: On Wednesday, 21 December 2016 at 11:36:14 UTC, thedeemon wrote: [...] If this is true, a blog post about it with more details is very welcome --Ilya Have you seen this one? http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html Thanks for the link, will read
Re: D future ...
On Wednesday, 21 December 2016 at 11:54:35 UTC, Ilya Yaroshenko wrote: On Wednesday, 21 December 2016 at 11:36:14 UTC, thedeemon wrote: On Tuesday, 20 December 2016 at 10:18:12 UTC, Kelly Sommers wrote: [...] Bad news: without complete redesign of the language and turning into one more C++/CLI (where you have different kinds of pointers in the language for GC and non-GC), having C performance and Go-style low-pause GC is not really possible. You have to choose one. Go chose GC with short pauses but paid with slow speed overall and slow C interop. D chose C-level performance but paid for it with a slow GC. If this is true, a blog post about it with more details is very welcome --Ilya Have you seen this one? http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html
Re: D future ...
On Tuesday, 20 December 2016 at 16:22:43 UTC, Walter Bright wrote: D is quite a bit less formal, but still, if you want action consider that you aren't going to get it with any organization unless you're willing to: 1. pay others to do it 2. convince others that your important issues are more important than everyone else's important issues that they are already working on 3. put some effort into it yourself This includes C, C++, Java, Go, Rust, basically every language in existence. --- Note that pretty much every day in the D forums, people post lists of their most important issues they want other people to work on. And the lists are always different. When people invest time into solving the problems they complain about, that's evidence that those issues are more important. It's the same in C++ land - a common sentiment among the C++ stars is that if someone isn't willing to make an effort to write a proposal to the C++ Committee, it isn't an issue worth their time, either. It really can't be any other way. What about the first way in your list ("pay others to do it")? From what I gather, this was one of the reasons for founding the D Foundation. There are many "boring" tasks that few people seem interested in doing: improving the documentation, maintaining the website, improving the forum system (it lacks many important features IMHO), improving IDE support for D (I have no idea how one would go about doing this but it's important), etc. (The vision document in the D wiki contains many more such "boring" tasks). And the few people that do work on the "boring" stuff seem to be the "wrong" people. One does not need to be a compiler expert or a metaprogramming guru to work on the tasks mentioned. That would be a bad use of that person's time - his/her skills lie elsewhere. If no one is interested in doing this stuff then maybe it's a good idea for the D Foundation to hire some people who'll dedicate their time to these issues. 
I do not think that this would be a bad use of the foundation's funds.
Re: Red Hat's issues in considering the D language
On Wednesday, 21 December 2016 at 13:18:48 UTC, Andrei Alexandrescu wrote: On 12/20/16 6:08 PM, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b An engineer from Debian wrote down what's needed on the distribution side to give a green light to the D language: https://gist.github.com/ximion/fe6264481319dd94c8308b1ea4e8207a Andrei Thank you for finding these links. The listed issues are very important.
Re: Red Hat's issues in considering the D language
On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b Thanks, Andrei What is the story over the ownership of DMD's backend? I believe Walter's former employer has some stake in it. Has Walter spoken to them about them donating whatever rights they have to the D foundation?
Re: Red Hat's issues in considering the D language
On 12/20/16 6:08 PM, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b An engineer from Debian wrote down what's needed on the distribution side to give a green light to the D language: https://gist.github.com/ximion/fe6264481319dd94c8308b1ea4e8207a Andrei
Re: Socket missing option: SO_REUSEPORT
On Wednesday, 21 December 2016 at 13:01:53 UTC, Benjiro wrote: I don't think this needs weeks of discussion ;) No discussion needed at all. You could simply file an issue here: https://issues.dlang.org/, or submit a PR.
Socket missing option: SO_REUSEPORT
Just checked the socket code, and there is a small feature missing:

enum SocketOption: int
{
    DEBUG     = SO_DEBUG,     /// Record debugging information
    BROADCAST = SO_BROADCAST, /// Allow transmission of broadcast messages
    REUSEADDR = SO_REUSEADDR, /// Allow local reuse of address

What needs to be added:

    REUSEPORT = SO_REUSEPORT, /// Allow local reuse of the port

I don't think this needs weeks of discussion ;)
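Until such an enum member exists, a caller can pass the raw platform constant through the existing Socket.setOption overload. A sketch of the workaround; note the numeric value 15 is the Linux-specific SO_REUSEPORT constant assumed here, so this is not portable as written:

```d
import std.socket;

void main()
{
    // SO_REUSEPORT is not in std.socket's SocketOption enum, so we cast
    // the raw constant. 15 is the Linux value; other platforms differ.
    enum SO_REUSEPORT = 15;

    auto listener = new TcpSocket();
    listener.setOption(SocketOptionLevel.SOCKET,
                       cast(SocketOption) SO_REUSEPORT, 1);
    listener.bind(new InternetAddress("127.0.0.1", 8080));
    listener.listen(10);
}
```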
Re: D future ...
On Wednesday, 21 December 2016 at 11:54:35 UTC, Ilya Yaroshenko wrote: On Wednesday, 21 December 2016 at 11:36:14 UTC, thedeemon wrote: On Tuesday, 20 December 2016 at 10:18:12 UTC, Kelly Sommers wrote: [...] Bad news: without complete redesign of the language and turning into one more C++/CLI (where you have different kinds of pointers in the language for GC and non-GC), having C performance and Go-style low-pause GC is not really possible. You have to choose one. Go chose GC with short pauses but paid with slow speed overall and slow C interop. D chose C-level performance but paid for it with a slow GC. If this is true, a blog post about it with more details is very welcome --Ilya You may want to open PR for Mir Blog: https://github.com/libmir/blog
Re: D future ...
On Wednesday, 21 December 2016 at 11:36:14 UTC, thedeemon wrote: On Tuesday, 20 December 2016 at 10:18:12 UTC, Kelly Sommers wrote: [...] Bad news: without complete redesign of the language and turning into one more C++/CLI (where you have different kinds of pointers in the language for GC and non-GC), having C performance and Go-style low-pause GC is not really possible. You have to choose one. Go chose GC with short pauses but paid with slow speed overall and slow C interop. D chose C-level performance but paid for it with a slow GC. If this is true, a blog post about it with more details is very welcome --Ilya
Re: D future ...
On Tuesday, 20 December 2016 at 10:18:12 UTC, Kelly Sommers wrote: What I really want is what C++ wanted to deliver but it doesn't. I want something better than writing C but with the same performance as C and the ability to interface with C without the performance loss and with easily composable libraries. D in my opinion in some ways is close to these goals. It's simpler to understand and write than C++. It has the problem of being a GC'd language however and it's unclear to me if the GC in D is evolving like the Go GC is. ... The things I really want from D to really sway me would be the following (some already exist): 1. Evolve the GC like Go has. 2. No overhead calling C libraries. ... Bad news: without complete redesign of the language and turning into one more C++/CLI (where you have different kinds of pointers in the language for GC and non-GC), having C performance and Go-style low-pause GC is not really possible. You have to choose one. Go chose GC with short pauses but paid with slow speed overall and slow C interop. D chose C-level performance but paid for it with a slow GC.
Re: D future ...
On Wednesday, 21 December 2016 at 09:35:31 UTC, Andrey wrote: On Wednesday, 21 December 2016 at 07:47:08 UTC, O-N-S wrote: On Monday, 19 December 2016 at 23:02:59 UTC, Benjiro wrote: I split this from the "Re: A betterC modular standard library?" topic because my response will be too much off-topic, but the whole thread is rubbing me the wrong way. I see some of the same arguments coming up all the time, with some frequency. Five stars out of five! Regards, Ozan Read the whole branch of the topic: a chance for some social research. Age-meter))) Andrei, Walter > 45; Benjiro, Dicebot 20-27; others - difficult to say right now; need more time; don't want to upset anyone, just a joke ))) How you use the language is more important than your age. Scientists or AI developers expect the highest performance for number-crunching and don't care about fast and easy development/deployment/etc. Devops (like me) don't care about super-performance, but do care about easy development (read: library availability, easy debugging, etc.) and deployment; gamedevs probably care about the GC, and I don't know what else. The language core team can choose to care about every category of developers or only about some specific ones. And one note on the volunteer model of language development - it has obvious pros and cons. How much the cons affect the language's progress depends on the goals of the language creators and the community.
Re: Red Hat's issues in considering the D language
On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. Is moving to LLVM backend or LDC something that is on the roadmap?
Re: Improvement in pure functions specification
On 21.12.2016 01:58, Andrei Alexandrescu wrote: On 12/20/16 7:40 PM, Timon Gehr wrote: On 20.12.2016 23:49, Andrei Alexandrescu wrote: https://github.com/dlang/dlang.org/pull/1528 -- Andrei Good, except: "$(P `pure` functions returning `void` will be always called even if it is strongly `pure`. The implementation must assume the function does something outside the confines of the type system and is therefore not allowed to elide the call, even if it appears to have no possible effect.)" I think this makes no sense. What is the idea behind this paragraph? A function that traces execution via a debug statement, for example. -- Andrei IMNSHO, that shouldn't restrict a non-debug build, and it should not be thought of as being outside the type system's confines. Also: - 'void' should not be a special case: either all pure functions can be optimized, or none of them. (A void-returning pure function can be called in a non-void-returning pure function.) - Pure functions cannot be elided without considering their bodies in any case, as they can terminate the program by throwing an error: immutable(void) fail() pure nothrow @safe { throw new Error("boom!"); } or cause the program to fail to terminate: immutable(void) loop() pure nothrow @safe { for(;;){} }
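For contrast, here is a sketch of the call-folding that the spec wording is ultimately about: for a strongly pure function, a compiler is allowed to reuse a previous result rather than calling again (whether a particular compiler actually does so is an optimization detail, not a guarantee):

```d
// A strongly pure function: its result depends only on its arguments.
int square(int x) pure nothrow @safe
{
    return x * x;
}

void main()
{
    int a = square(3);
    int b = square(3); // a compiler may fold this into the first call (CSE)
    assert(a + b == 18);

    // Timon's counterexamples still apply: folding or eliding a call is
    // only sound if the body cannot throw an Error or loop forever.
}
```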
Re: D future ...
On Wednesday, 21 December 2016 at 07:47:08 UTC, O-N-S wrote: On Monday, 19 December 2016 at 23:02:59 UTC, Benjiro wrote: I split this from the "Re: A betterC modular standard library?" topic because my response will be too much off-topic, but the whole thread is rubbing me the wrong way. I see some of the same arguments coming up all the time, with some frequency. Five stars out of five! Regards, Ozan Read the whole branch of the topic: a chance for some social research. Age-meter))) Andrei, Walter > 45; Benjiro, Dicebot 20-27; others - difficult to say right now; need more time; don't want to upset anyone, just a joke )))
Re: Red Hat's issues in considering the D language
On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: Hello, a few engineers at Red Hat are taking a look at using the D language on the desktop and have reached out to us. They have created a list of issues. We are on the top-level ones, and of course would appreciate any community help as well. https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b Thanks, Andrei The assert/unittest issues can be solved with a library: https://github.com/nomad-software/dunit
Spotted on twitter: Packt blog post about some compile-time features of D
https://www.packtpub.com/books/content/modelling-rpg-d/ Maybe worth linking on reddit etc.?
About useful assert error information
(Just picking out a random item from the list at [1], showing the current progress, and giving a good starting point. Maybe we can do this with the entire list?) The assert statement is "dumb" in that it doesn't show me what data it actually compared, making bare assert in unittests very cumbersome to use. Pretty much every slightly bigger project I have seen wraps assert in some other function to display useful information about the specific check. I would consider a useful-by-default assert statement to be very valuable, especially because it could show much more and nicer information, and we could all drop our workarounds. There has been a lot of work on this, e.g. https://issues.dlang.org/show_bug.cgi?id=5547 http://wiki.dlang.org/DIP83 https://github.com/dlang/dmd/pull/5189 and for the record a DMD PR (https://github.com/dlang/dmd/pull/263) was rejected, because: this is more properly the domain of a library template However, the library PR got rejected because it should better be done in the compiler: https://github.com/dlang/phobos/pull/4323 I think the best solution is to push DIP83 through the new DIP process and thus convince Walter that useful asserts should be part of the compiler. [1] http://forum.dlang.org/post/o3cdl9$ecs$1...@digitalmars.com
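For illustration, the kind of wrapper the post refers to might look like this (a hypothetical assertEqual helper, not a Phobos API): on failure it reports the values that were actually compared, which a plain assert does not:

```d
import std.format : format;

// Hypothetical helper: like assert(actual == expected), but the
// failure message shows both operands.
void assertEqual(A, B)(A actual, B expected,
                       string file = __FILE__, size_t line = __LINE__)
{
    if (actual != expected)
        throw new Error(format("%s(%s): expected %s, got %s",
                               file, line, expected, actual),
                        file, line);
}

unittest
{
    assertEqual(2 + 2, 4);     // passes silently
    // assertEqual(2 + 2, 5);  // would fail with "expected 5, got 4"
}
```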