Re: Rather D1 then D2
On Monday, 24 September 2018 at 09:19:34 UTC, Chris wrote: On Sunday, 23 September 2018 at 02:05:42 UTC, Jonathan M Davis wrote: With regards to D1 users who are unhappy with D2, I think that it makes some sense to point out that a subset of D2 can be used in a way that's a lot like D1, but ultimately, if someone doesn't like the direction that D2 took, they're probably better off finding a language that better fits whatever it is that they're looking for in a language. Trying to convince someone to use a language that they don't like is likely to just make them unhappy. - Jonathan M Davis Maybe it's time for D3. Pick and choose the things that work and really do make sense and discard others that don't add much value but only bring trouble. I think D2 is a nice collection of Lego bricks by now that could be used to build something truly great. I don't agree; the time for D3 is still many years away. Transitioning to D3 now would be suboptimal, as the D2 language is still being "explored". The D community as a whole needs more experience with the language to understand pain points, so we can make more informed changes in D3 (if D3 ever happens; it may never need to).
Re: John Regehr on "Use of Assertions"
On Monday, 10 September 2018 at 20:25:21 UTC, Jonathan M Davis wrote: I propose:
- 'assume': aborts on false condition in debug builds, not checked in release builds, used as optimizer hint;
- 'insist': aborts on false condition in debug builds, aborts on false condition in release builds, used as optimizer hint;
- 'uphold': aborts on false condition in debug builds, aborts on false condition in release builds, NOT used as optimizer hint;
- 'allege': logs error and aborts on false condition in debug builds, logs error and continues on false condition in release builds, NOT used as optimizer hint;
Honestly, that seems like total overkill.

I'm pretty sure that was sarcasm on H. S. Teoh's part. Of course, I can't tell for sure due to Poe's Law.
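Sarcasm or not, the 'uphold' flavour (aborts on a false condition in debug and release builds alike) is the easiest of those to picture, because you can sketch it as a plain library function today. A rough, untested sketch; the name and message are of course made up:

void uphold(bool condition, string msg = "uphold failed")
{
    if (!condition)
        assert(0, msg); // assert(0) is special-cased by the compiler: it still
                        // halts execution under -release, though the message
                        // may not be printed there
}

The "used as optimizer hint" variants, on the other hand, would need actual compiler cooperation, which is presumably why they were phrased as new keywords.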
Re: Small @nogc experience report
On Friday, 7 September 2018 at 17:35:12 UTC, Eugene Wissner wrote: On Friday, 7 September 2018 at 17:01:09 UTC, Meta wrote: Semi-unrelated, but I think you should open a bug for this one. I remember Andrei stating before that every function in std.algorithm except for levenshteinDistance(?) is @nogc, so he either missed topNCopy or the gc-ness of the function has changed sometime between ~2015 and now. It was never true. Here is another example:

import std.algorithm;

void main() @nogc
{
    int[4] a, b;
    fill(a[], b[]);
}

The funny thing is that fill() doesn't always allocate, but only if the element to fill with is an array. fill() uses enforce() which allocates and throws. Other algorithms (e.g. equal) have special handling for character arrays and throw if they get wrong Unicode or use some auto-decoding functions that aren't @nogc.

I'm sure I've heard Andrei mention it multiple times, but I must be misremembering something.
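To make the point about fill() concrete, here's how I read it (an untested sketch; treat the claim that the scalar overload infers @nogc as an assumption on my part):

import std.algorithm : fill;

void fillWithValue() @nogc
{
    int[4] a;
    fill(a[], 0); // range-plus-value overload: a plain assignment loop,
                  // so @nogc should be inferred
}

/*
void fillWithRange() @nogc // rejected
{
    int[4] a, b;
    fill(a[], b[]); // range-plus-range overload: goes through enforce(),
                    // which allocates and throws, hence not @nogc
}
*/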
Re: Small @nogc experience report
On Friday, 7 September 2018 at 16:44:05 UTC, Peter Alexander wrote: I recently wrote a small program of ~600 lines of code to solve an optimisation puzzle. Profiling showed that GC allocations were using non-trivial CPU, so I decided to try and apply @nogc to remove allocations. This is a small experience report of my efforts.

1. My program does some initialisation before the main solver. I don't care about allocations in the initialisation. Since not all of my code needed to be @nogc, I couldn't add `@nogc:` to the top of the file and instead had to refactor my code into initialisation parts and main loop parts and wrap the latter in @nogc { ... }. This wasn't a major issue, but inconvenient.

2. For my code the errors were quite good. I was immediately able to see where GC allocations were occurring and fix them.

3. It was really frustrating that I had to make the compiler happy before I was able to run anything again. Due to point #1 I had to move code around to restructure things and wanted to make sure everything continued working before all GC allocations were removed.

4. I used std.algorithm.topNCopy, which is not @nogc. The error just says "cannot call non-@nogc function [...]". I know there are efforts to make Phobos more @nogc friendly, but seeing this error is like hitting a brick wall. I wouldn't expect topNCopy to use GC, but as a user, what do I do with the error? Having to dig into Phobos source is unpleasant. Should I file a bug? What if it is intentionally not @nogc for some subtle reason? Do I rewrite topNCopy?

Semi-unrelated, but I think you should open a bug for this one. I remember Andrei stating before that every function in std.algorithm except for levenshteinDistance(?) is @nogc, so he either missed topNCopy or the gc-ness of the function has changed sometime between ~2015 and now. Actually, thanks to the fact that run.dlang.io provides the ability to compile a code snippet with all compilers since 2.060, this is very easy to debug:

import std.algorithm;
import std.range;

void main()
{
    int[100] store;
    auto nums = iota(100);
    nums.topNCopy(store[]); // compiles
}

Now if I add @nogc to main:

Up to 2.060: Failure with output: onlineapp.d(4): Error: valid attribute identifiers are @property, @safe, @trusted, @system, @disable not @nogc
2.061 to 2.065.0: Failure with output: onlineapp.d(4): Error: undefined identifier nogc
2.066.0: Failure with output: onlineapp.d(8): Error: @nogc function 'D main' cannot call non-@nogc function 'std.algorithm.topNCopy!("a < b", Result, int[]).topNCopy'
2.067.1 to 2.078.1: Failure with output: onlineapp.d(8): Error: @nogc function 'D main' cannot call non-@nogc function 'std.algorithm.sorting.topNCopy!("a < b", Result, int[]).topNCopy'
Since 2.079.1: Failure with output: onlineapp.d(8): Error: `@nogc` function `D main` cannot call non-@nogc function `std.algorithm.sorting.topNCopy!("a < b", Result, int[]).topNCopy`

So it seems that it's never worked. Looking at the implementation, it uses a std.container.BinaryHeap, so it'd require a small rewrite to work with @nogc.

5. Sometimes I wanted to add writeln to my code to debug things, but writeln is not @nogc, so I could not. I could have used printf in hindsight, but was too frustrated to continue.

You are allowed to call "@gc" functions inside @nogc functions if you prefix them with a debug statement, e.g.:

void main() @nogc
{
    debug topNCopy(source, target);
}

You then have to pass the appropriate switch (-debug) to the compiler to tell it to compile in the debug code.
6. In general, peppering my code with @nogc annotations was just unpleasant.

7. In the end I just gave up and used the -vgc flag, which worked great. I had to ignore allocations from initialisation, but that was easy. It might be nice to have some sort of `ReportGC` RAII struct to scope when -vgc reports the GC.

I've been thinking lately that @nogc may have been going too far, and -vgc was all that was actually needed. -vgc gives you the freedom to remove or ignore GC allocations as necessary, instead of @nogc's all-or-nothing approach.
Re: Messing with betterC and string type.
On Friday, 7 September 2018 at 15:57:19 UTC, Adam D. Ruppe wrote: On Friday, 7 September 2018 at 15:48:39 UTC, SrMordred wrote: Yes, but you don't really need this function. Whoa, when was that added?! I don't remember ctors via ufcs being there but indeed, it works on newest dmd. I'm pretty sure it's worked for a long time. I remember Bearophile asking for it a few years ago, and I'm pretty sure it was Kenji that implemented it.
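In case anyone wants to try it, here's a minimal, untested sketch of the kind of call I mean (the struct is made up):

struct Meters
{
    int value;
    this(int value) { this.value = value; }
}

void main()
{
    auto a = Meters(5); // ordinary constructor call
    auto b = 5.Meters;  // the same constructor invoked UFCS-style
    assert(a == b);
}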
Re: What changes to D would you like to pay for?
On Wednesday, 5 September 2018 at 07:32:54 UTC, Simen Kjærås wrote: On Wednesday, 5 September 2018 at 07:00:49 UTC, Joakim wrote: Please answer these two questions if you're using or would like to use D, I have supplied my own answers as an example: 1. What D initiatives would you like to fund and how much money would you stake on each? (Nobody is going to hold you to your numbers, but please be realistic.) I'll throw $200 at issue 5710. It's already got $200 on bountysource, and it's the one issue I consistently bump into. And yes, you can hold me to this. 2. Would you be okay with the patches you fund not being open-sourced for a limited time, with the time limit or funding threshold for open source release specified ahead of time, to ensure that funding targets are hit? Sure, as long as they're eventually open-sourced. -- Simen I'll pledge another $100 USD to that one. Ditto on the open source question.
Re: John Regehr on "Use of Assertions"
On Wednesday, 5 September 2018 at 10:30:46 UTC, Ola Fosheim Grøstad wrote: On Monday, 3 September 2018 at 16:53:35 UTC, Meta wrote: This battle has been fought over and over, with no movement on either side, so I'll just comment that no matter what John Regehr or anyone else says, my personal opinion is that you're 100% wrong on that point :-) Well, John Regehr seems to argue that you shouldn't use asserts for optimization even if they are turned on as the runtime might override a failed assert. «As developers, we might want to count on a certain kind of behavior when an assertion fails. For example, Linux’s BUG_ON() is defined to trigger a kernel panic. If we weaken Linux’s behavior, for example by logging an error message and continuing to execute, we could easily end up adding exploitable vulnerabilities.» So… I don't disagree. I think the only sane way to use asserts as an optimization guide is when the program will abort if the condition does not hold. That, to me, makes perfect sense, since you're basically telling the compiler "This condition must be true past this assertion point, because otherwise program execution will not continue past this point". You're ensuring that the condition specified in the assert is true by definition. Not having that hard guarantee but still using asserts as an optimization guide is absolutely insane, IMO.
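To make that reasoning concrete, a small sketch of the principle (not a claim about what any particular compiler currently does):

int divide(int a, int b)
{
    assert(b != 0); // if a failed assert is *guaranteed* to abort, everything
                    // below only ever executes with b != 0, so an optimizer
                    // may legitimately rely on that fact
    return a / b;   // if instead the check can be compiled out without the
                    // abort guarantee, relying on it turns a wrong assumption
                    // into undefined behaviour rather than a clean failure
}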
Re: Static foreach bug?
On Monday, 3 September 2018 at 18:03:18 UTC, Soma wrote: Sorry to disrupt your thread, but as a lurker in this forum using D for small projects, and after looking at such a snippet my first impression is how D is getting polluted and becoming more like Java and C++. "final class", "public final this", "super"... I agree with you that D has more than a few function attributes and it gets confusing, but I'd like to point out that "final", "public", "super", etc. have been in D since the first version of D1, if I'm not mistaken.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 14:26:46 UTC, Laeeth Isharc wrote: I just spoke with Dicebot about work stuff. He incidentally mentioned what I said before based on my impressions. The people doing work with a language have better things to do than spend a lot of time on forums. And I think in open source you earn the right to be listened to by doing work of some kind. He said (which I knew already) it was an old post he didn't put up in the end - somebody discovered it in his repo. He is working fulltime as a consultant with me for Symmetry and is writing D as part of that role. I don't think that indicates he didn't mean his criticisms, and maybe one could learn from those. But a whole thread triggered by this is quite entertaining. Interesting, I did not realize that he had left Sociomantic. Even if he did not release the article, I think it's a good idea that we take some of his criticisms to heart. I, at the very least, agree with at least a few of them, and as we've seen, so do others.
Re: John Regehr on "Use of Assertions"
On Saturday, 1 September 2018 at 20:15:15 UTC, Walter Bright wrote: https://blog.regehr.org/archives/1091 As usual, John nails it in a particularly well-written essay. "ASSERT(expr) Asserts that an expression is true. The expression may or may not be evaluated. If the expression is true, execution continues normally. If the expression is false, what happens is undefined." Note the "may or may not be evaluated." We've debated this here before. I'm rather pleased that John agrees with me on this. I.e. the optimizer can assume the expression is true and use that information to generate better code, even if the assert code generation is turned off. I used to completely agree with your position about asserts being used for optimization purposes, until I realized that part of your position was for asserts to be used as optimization hints *even if they aren't checked*. This battle has been fought over and over, with no movement on either side, so I'll just comment that no matter what John Regehr or anyone else says, my personal opinion is that you're 100% wrong on that point :-)
Re: [OT] "I like writing in D" - Hans Zimmer
On Sunday, 26 August 2018 at 11:46:17 UTC, Olivier Pisano wrote: On Wednesday, 22 August 2018 at 22:51:58 UTC, Piotrek wrote: You may already know that from youtube. It seems D starts getting traction even among musicians: https://www.youtube.com/watch?v=yCX1Ze3OcKo&feature=youtu.be&t=64 That really put a smile on my face :D And it would be a nice example of a D advertising campaign ;) Cheers, Piotrek Moreover, D is written using two sharp signs, which gives me ideas. D = C##
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Friday, 24 August 2018 at 21:53:18 UTC, H. S. Teoh wrote: I think it's clear by now that most of D's woes are not really technical in nature, but managerial. Agreed. I'm not sure how to improve this situation, since I'm no manager type either. Money is the only feasible solution IMO. How many people posting on this forum would quit their job tomorrow and solely contribute to OSS and/or work on their own projects if money wasn't an issue? The majority of people don't like being told what to do, and only want to work on what they're interested in. The only way to get them to work on something they're not interested in is to pay them.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Friday, 24 August 2018 at 21:43:45 UTC, Steven Schveighoffer wrote: According to this comment: https://github.com/dlang/phobos/pull/5291#issuecomment-360929553 There was no way to get a deprecation to work. When we can't get a deprecation to work, we face a hard decision -- actually break code right away, print lots of crappy errors, or just leave the bug unfixed. -Steve Ah, that's unfortunate. Damned if you do, damned if you don't. I still don't agree with making a breaking change to Phobos.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Friday, 24 August 2018 at 17:12:53 UTC, H. S. Teoh wrote: I got bitten by this just yesterday. Update dmd git master, update vibe.d git master, now my vibe.d project doesn't compile anymore due to some silly string.d error somewhere in one of vibe.d's dependencies. :-/ While we're airing grievances about code breakages, I hit this little gem the other day, and it annoyed me enough to complain about it: https://github.com/dlang/phobos/pull/5291#issuecomment-414196174 What really gets me is the actual removal of the symbol. If it had been left there with a deprecation message, I would've caught the problem immediately at the source and fixed it in a few minutes. Instead, I spent an hour or so tracing "execution" paths through a codebase that I'm unfamiliar with to figure out why a static if branch is no longer being taken. On the topic of this thread... I was a bit confused by Dicebot's decision to leave at the time, because he seemed to like dip1000 but then later had a heel turn and left. Looking back through newsgroup threads, it seems like it was mostly that he disagreed with the project management side of things (which he also brings up in his article): an incomplete version of the feature was merged, and code already in the main branch had to be adjusted to support it. People have complained about it before, and it's distressingly common in D. Why it's not done in a feature branch and then merged in, I don't know, but I do agree with his objections.
Re: Whence came UFCS?
On Friday, 27 July 2018 at 03:41:29 UTC, Sameer Pradhan wrote: During our Boston D Meetup today, we went through and deconstructed Walter's wonderfully elegant blog post from 2012 called "Component Programming in D" http://www.drdobbs.com/article/print?articleId=240008321&siteSectionName=architecture-and-design I stumbled upon this gem (and another score or so articles on the digital mars site) a few days back, while following various hyperlinks, within and around the blog on Walter's take on what was C's biggest mistake which has been getting a lot of comments over the past few days. This post which I have been trying to digest bit-by-bit over the past few days, made me realize why I fell in love with D in the first place. To top it all, Steven played a lightening talk from 2018, called "values as types" by Andreas (https://youtu.be/Odj_5_pDN-U?t=21m10s) which is a parody on C++ and is so utterly ludicrous, we could not stop laughing. Anyway, back to the point. During the same period, but independent of the thread on C's mistake, I found that Kotlin has something called "Extension Functions" and "Extension Properties" (https://kotlinlang.org/docs/reference/extensions.html) via. the following article on Medium https://medium.com/@magnus.chatt/why-you-should-totally-switch-to-kotlin-c7bbde9e10d5 Specifically Item #14. What I saw/read seemed eerily familiar. Almost like a "wolf in sheeps clothing". The example screamed of UFCS. Then, later, while reading the above Kotlin documentation, I saw a reference being made to similar functionality in C# and Gosu (apparently a programming language that I have never heard of before today) called Extensions. Furthermore, I found that Project Lombok (https://projectlombok.org/features/experimental/ExtensionMethod) has tried to make this idiom/functionality available in Java through a @ExtensionMethod annotation. This also almost exactly represent UFCS functionality, though much more cludgy, I must say. Therefore, after reading the word "Extension" in three different contexts, I started wondering and various questions came to mind, starting with---Whence came UFCS? a. Did Walter and/or Andrei invent it independently of C#? b. Was it called UFCS in some other language? c. Were they not aware of Extensions when they coined UFCS? d. Are UFCS and Extensions really one and the same thing? e. If not, what is/are the difference(s)? And is this why a different term/acronym was coined? As far as I can tell a Google search on UFCS leads to only material on D and a Wikipedia entry mentioning Stroustrup and Sutter's proposal from 2016 to extend C++ to have this facility. It is likely that the Wikipedia article is severely incomplete in its historical connections as there is no mention of C# or that of Walter's 2012 Dr. Dobbs post which already lists it towards the end---in the list of features that D has which allows the creation of elegant components---among other dozen or so features. In the end I thought I might as well dump my thoughts on the D forum and hear straight from the horse's (or horses') mouth(s)---so to speak. -- Sameer I remember reading an article that Andrei wrote a long time ago that cited an older language as the inspiration for UFCS, but it was probably from around 2007 where UFCS for arrays was already well-established (I never knew that it was actually a bug at one point). I don't remember what that language was unfortunately.
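For readers who haven't run into the acronym before, UFCS (uniform function call syntax) boils down to this (function and values made up):

int doubled(int x) { return x * 2; } // an ordinary free function

void main()
{
    assert(doubled(5) == 10);        // normal call
    assert(5.doubled == 10);         // UFCS: rewritten by the compiler to doubled(5)
    assert(5.doubled.doubled == 20); // and it chains, which is most of the appeal
}

The call-site effect is the same thing C# and Kotlin extension methods give you; the main practical difference, as far as I know, is that in D any free function in scope participates, with no special declaration required on the function itself.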
Re: Copy Constructor DIP
On Friday, 13 July 2018 at 21:49:57 UTC, rikki cattermole wrote: On 14/07/2018 9:28 AM, Manu wrote: I've already contributed other points to this DIP in the way you describe. This one however is very strange, and I'm surprised you can find the hand-wavy introduction of a new attribute without any sense of where it's going to be okay. Or maybe, I'm surprised I'm the only one that finds problem with that. You are very much not alone. I didn't articulate it very clearly, but I am super not happy with such a new attribute. I'm not crazy about it either, but it may be a necessary evil to avoid breaking code (although it doesn't entirely avoid that, as has been demonstrated).
Re: Guido van Rossum has resigned
On Thursday, 12 July 2018 at 21:16:02 UTC, Walter Bright wrote: as Python's BDFL. https://mail.python.org/pipermail/python-committers/2018-July/005664.html I looked up PEP 572 and... *this* is what people are up in arms about? Assignment in expressions, which works fine the majority of the time in the many languages that allow it. The benefits far outweigh the drawbacks IMO.
Re: Copy Constructor DIP
On Friday, 13 July 2018 at 02:32:59 UTC, Manu wrote: Seriously, if I was making this proposal to you, and you were in my position... there is no way in hell that you'd allow any of us to slip something so substantial by like that with the wave of a hand. This DIP depends on @implicit. How can you argue otherwise? Nothing is being slipped by as far as I'm concerned. @implicit is solely introduced in the DIP as a marker for the copy constructor, and it doesn't seem like it's intended for anything further than avoiding breaking code. It feels to me like you're making a mountain out of an ant hill. Still, regardless of what the intention was, @implicit was a poor choice of words for exactly this reason. The DIP itself seems solid. Makes me a little nervous to be introducing copy constructors, but if it's really that untenable to typecheck qualified postblits, then I'm all for it. One step closer to eliminating Qualified Hell when wrapping structs in other structs.
Re: D community's view on syntactic sugar
On Saturday, 30 June 2018 at 09:34:30 UTC, Dmitry Olshansky wrote: Okay. I see that you see. I “meet” a guy, you know I admire that he _somehow_ made an HTTP Application server that is faster then let’s pick good stuff... Nginx Plus with tuning by Igor himeself (before sweat too mich like Igor at least twice did - no shit these discussion, you THAT guy he rans his... GWAN? as if it was _just_ an spplication server. You know not a word more but sombody who say: - knows infosecurity and likes that stuff that say slices into the kernel - rootkit for instance - aand say injects something into the kernel You know where it goes and yet that guy obviously good and the humor Boy wicked humor, I kid you not I “invented” this trick this morning AND *solved* the _mystery_ of GWAN. Fuck, it is at _very_ least a brilliant hoax, and boy he is good... He if that’s the real name I mean what if this is Linus (oh, my and you know GWAN started _as_ _windows_ “app server” ...) If you don’t laught now... I don’t now.. At the *very* least don’t _ever_ go to DConf... What the fuck with brain like you simply will not like it P.S. Atilla even if that is “alternative way” to *do* it... Yeah. Let’s say you owe... a bear and an a few hours to dicsuss it, okay? ;) I have no idea what I just read, but I'll have 10 of what you're having.
Re: tuple of delegates requires explicit type
On Wednesday, 20 June 2018 at 11:43:52 UTC, DigitalDesigns wrote:

alias f = void delegate();
Tuple!(f)[] fs;
fs ~= tuple(() { });      // fails
fs ~= Tuple!(f)(() { });  // passes

In tuple, it is seeing the lambda as void and thinks I'm trying to append a tuple of void. I don't see why the compiler can't see that it works.

It's because if you don't specify any types, the compiler creates a template function so type inference for the arguments/return type can take place. It could probably be inferred without making the literal a template, but that's just how it's done currently.
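If you want tuple() to keep working there, one other option (an untested sketch) is to give the literal its type up front, so that no inference is needed by the time tuple() sees it:

import std.typecons : Tuple, tuple;

alias f = void delegate();

void main()
{
    Tuple!(f)[] fs;
    f dg = () { };   // the literal takes its type from the declaration...
    fs ~= tuple(dg); // ...so tuple() sees exactly a void delegate()
}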
Re: D community's view on syntactic sugar
On Saturday, 16 June 2018 at 18:49:43 UTC, Steven Schveighoffer wrote: On 6/15/18 8:53 PM, Seb wrote: On Saturday, 16 June 2018 at 00:32:24 UTC, H. S. Teoh wrote: On Sat, Jun 16, 2018 at 12:20:35AM +, Seb via Digitalmars-d wrote: On Friday, 15 June 2018 at 23:04:40 UTC, Sjoerd Nijboer wrote: For someone coming from a C# background there is some seemingly simple syntactic sugar missing from D. * The null conditional operator `?.` e.g. SafeAccess https://github.com/BBasile/iz/blob/7336525992cb178ead83a7893a5a54597d840441/import/iz/sugar.d#L1551 Didn't Andrei propose an Elvis operator some time ago? Whatever became of that DIP? T https://forum.dlang.org/post/ot1q8b$23pt$1...@digitalmars.com https://github.com/dlang/dmd/pull/7242 I think Razvan focused on other projects and no one else bothered enough to write a DIP about this. I don't know if this is the same thing. I'm not a (recent) C# developer, but in Swift, ?. means something completely different from ?: in Andrei's proposal. -Steve

Yes, Andrei's proposal was for the equivalent of ?? in C#.

int p = 0;
int n = p ?: 42; // the proposed (never adopted) syntax: shorthand for p ? p : 42
assert(n == 42);
Re: import std.traits. std.string;
On Saturday, 16 June 2018 at 00:24:42 UTC, DigitalDesigns wrote: space is ignored! Seems like a bug std . traits . std . string is valid? Like most C-family languages, D is a freeform language[1]. Funnily enough, I don't think this is explicitly stated in the D spec (at least not that I could find). It's just assumed, because D is an evolution of C, C++, and Java primarily, all of which are freeform languages. There are a couple places where whitespace is significant in D, but for the most part, you can write your code in whatever way you want as long as it's syntactically valid according to the D grammar. 1. https://www.pcmag.com/encyclopedia/term/43487/free-form-language
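A quick illustration of just how free-form it is (module picked arbitrarily; all three declarations import the same thing):

import std.traits;
import std . traits ;
import
    std
        .
    traits
        ;
// whitespace and newlines only separate tokens; they never change meaning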
Re: D community's view on syntactic sugar
On Saturday, 16 June 2018 at 05:48:26 UTC, Nick Sabalausky (Abscissa) wrote: Everyone here is probably going to be different (D programmers are a varied bunch), but for me, I absolutely love syntax sugar. And I very much miss the earlier days when D was all about the sugar. Funny that you mention that. Back 6 years ago when I was looking for a C++ alternative that wasn't Java, and came across D (I think I saw it mentioned on Bartosz Milewski's site somewhere), there were exactly three things that took me from "okay, this is kind of cool" to "man, this looks awesome. I need to jump in and start learning this language right away": Array slicing (no brainer) Array operations (like a[] *= 3) Built-in complex numbers At the time though, I thought it was so cool that there was a language that had syntactic sugar like this (I was an undergrad student at the time, and I was getting really tired of having to write out for-loops in Java - never mind that I think Java had introduced for-each syntax at this point). The fact that for (int i = 0; i < array.length; i++) { array[i] *= 3; } Could be replaced with a one-liner in D just blew my mind. Same for complex numbers, I guess. I don't generally care about complex numbers and rarely use them, if ever, but I just thought it was so cool that having them built-in made the code so straightforward and beautiful. * a good syntax for properties so there's less code bloat. That bugged me for awhile, too, due to my earlier C# experience... I wrote a lot of C# a few years ago working on a big project for my then-employer, and I missed a lot of D features. However, the one thing I think I miss the most in D is C#'s getter/setter syntax. I hate using properties in D, because they are not transparently substitutable with bare members (even when we assume no static introspection), and because it requires two separate declarations to define both a setter and a getter function. If only D hadn't given up on the AST macros idea, we could actually implement most of this sugar AS A LIBRARY SOLUTION, just like modern D wants. I do wish D had some sort of procedural macro system a la what they're trying to do in Rust. The template systems C++ and D have now are just clumsier macro systems with a fraction of the power and double the complexity. At this point, it's a matter of taste, and Walter and Andrei's taste differs from mine. The only way I can truly get what I want from the D language is to fork it, which I don't have the time or knowledge to properly maintain. Maybe once I retire and get fed up with D, like Walter with C++, and thus the cycle begins anew ;-)
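For anyone who hasn't seen the two bits of sugar I'm talking about, here they are side by side (names invented):

void scale(int[] array)
{
    array[] *= 3; // array operation: the whole Java-style loop in one line
}

// The property boilerplate: D needs two declarations where
// C#'s `public int Value { get; set; }` needs one.
struct Thing
{
    private int _value;
    @property int value() const { return _value; }            // getter
    @property void value(int newValue) { _value = newValue; } // setter
}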
Re: stride in slices
On Monday, 4 June 2018 at 23:08:17 UTC, Ethan wrote: On Monday, 4 June 2018 at 18:11:47 UTC, Steven Schveighoffer wrote: BTW, do you have cross-module inlining on? Just to drive this point home. https://run.dlang.io/is/nrdzb0 Manually implemented stride and fill with everything forced inline. Otherwise, the original code is unchanged.

17 ms, 891 μs, and 6 hnsecs
15 ms, 694 μs, and 1 hnsec
15 ms, 570 μs, and 9 hnsecs

My new stride outperformed std.range stride, and the manual for-loop. And, because the third test uses the new stride, it also benefited. But interestingly runs ever so slightly faster... Just as an aside:

...
pragma( inline ) @property length() const { return range.length / strideCount; }
pragma( inline ) @property empty() const { return currFront > range.length; }
pragma( inline ) @property ref Elem front() { return range[ currFront ]; }
pragma( inline ) void popFront() { currFront += strideCount; }
...
pragma( inline ) auto stride( Range )( Range r, int a )
...
pragma( inline ) auto fill( Range, Value )( Range r, Value v )
...

pragma(inline), without any argument, does not force inlining. It actually does nothing; it just specifies that the "implementation's default behaviour" should be used. You have to annotate with pragma(inline, true) to force inlining (https://dlang.org/spec/pragma.html#inline). When I change all the pragma(inline) to pragma(inline, true), there is a non-trivial speedup:

14 ms, 517 μs, and 9 hnsecs
13 ms, 110 μs, and 1 hnsec
13 ms, 199 μs, and 9 hnsecs

There are further reductions using ldc-beta:

14 ms, 520 μs, and 4 hnsecs
13 ms, 87 μs, and 2 hnsecs
12 ms, 938 μs, and 8 hnsecs
Re: stride in slices
On Saturday, 2 June 2018 at 18:49:51 UTC, DigitalDesigns wrote: Proposal: [a..b;m] m is the stride, if ; is not a good char then |, :, !, or # could be good chars. This is exactly what std.range.stride does. The syntax [a..b;m] directly translates to [a..b].stride(m).
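Spelled out, with a made-up array and bounds:

import std.algorithm.comparison : equal;
import std.range : stride;

void main()
{
    auto arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
    auto s = arr[1 .. 9].stride(2); // what a hypothetical arr[1..9;2] would mean
    assert(s.equal([1, 3, 5, 7]));
}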
Re: Ideas for students' summer projects
All of your suggestions are good ideas, Mike, but they're way too big for an 8-10 week student project. We need something smaller, like making key Phobos functions @nogc/@safe/pure/nothrow/etc.
Re: Why is 64-bit dmd not built as part of the Windows release?
On Tuesday, 15 May 2018 at 16:01:28 UTC, Atila Neves wrote: I don't know why even bother with 32-bit dmd to begin with, but at least there should be an option. I just spent 45min trying to build 64-bit dmd on Windows. It wasn't fun. "Isn't it just make -f win64.mak?", I hear you ask. Yes. If you want a version with debug messages turned on. It took me 45min to learn that disabling those is... non-trivial. As it turns out, trying to build dmd yourself from the released tag and replacing the .exe from the installer by the one you created works, unless: 1. You remove -debug 2. You add -O If you do #1 or #2, then the produced dmd.exe doesn't work. At all. 32 *or* 64 bits. And this is something you need to edit the makefile for, trying to do that from the command line was an exercise in futility. "How does the installer-built version work then?", I again hear you ask. No idea. Debug 64-bit dmd it is! I *would* try and add a 64-bit dmd to the installer, but apparently to build the Windows installer you need a special Windows box commisioned by the Vatican and blessed by the Pope himself. Atila I haven't tried 64-bit builds in awhile, but every time I try to build on Windows I run into some new issue that I have to work through. Just recently there was a check added to the build process which fails if there are Windows line endings in any source files, which IMO is insane since you're just building, not committing.
Re: Extend the call site default argument expansion mechanism?
On Tuesday, 15 May 2018 at 14:52:46 UTC, Steven Schveighoffer wrote: Sadly with(WithAlloc!alloc) doesn't work. (If you have to use withAlloc.func everywhere, it kind of destroy the point, doesn't it?) It seems opDispatch isn't being used in the with statement. That seems like a bug, or maybe a limitation. I'm not sure how "with" works, but I assumed it would try calling as a member, and then if it doesn't work, try the call normally. Probably it's checking to see if it has that member first. Annoying... -Steve Yeah I tried it with opDispatch but it didn't work. I vaguely remember some changes being made to how lookup is done in the past year or so... but I can't find the PR in question.
Re: Extend the call site default argument expansion mechanism?
On Friday, 11 May 2018 at 15:03:41 UTC, Uknown wrote: I see what you're saying and I agree with you. I think a better way would be to try and extend the `with` syntax to work with arbitrary functions, rather than only objects. That would make it more useful. So something like:

---
void f1(allocator alloc, ...){}
void f2(allocator alloc, ...){}
...
void fn(allocator alloc, ...){}

void main()
{
    with(MyAllocator)
    {
        f1(...);
        f2(...);
        ...
        fn(...);
    }
}
---

It's not as pretty, and I don't know if it works outside this toy example yet, but you can do:

import std.stdio;

struct Allocator
{
    auto call(alias F, Args...)(Args args)
    {
        return F(this, args);
    }

    void deallocateAll()
    {
        writeln("deallocateAll");
    }
}

void f1(Allocator a, int n) { writeln("f1"); }
void f2(Allocator, string s, double d) { writeln("f2"); }

void main()
{
    with (Allocator())
    {
        scope(exit) deallocateAll;
        call!f1(2);
        call!f2("asdf", 1.0);
    }
}
Re: Extend the call site default argument expansion mechanism?
On Friday, 11 May 2018 at 11:42:07 UTC, Dukc wrote: On Thursday, 10 May 2018 at 14:15:18 UTC, Yuxuan Shui wrote: ... // constructor of DataStructure this(Allocator alloc=__ALLOC__) {...} ... auto alloc = new SomeAllocator(); define __ALLOC__ = alloc; // And we don't need to pass alloc everytime ... Is this a good idea? Doesn't this basically mean including the implicits Martin Odersky talked about at Dconf in D? Yes it does. I was thinking the exact same thing while watching his talk; implicits would be perfect for allocators.
Re: Bugzilla & PR sprint on the first weekend of every month
On Tuesday, 8 May 2018 at 18:48:15 UTC, Seb wrote: What do you guys think about having a dedicated "Bugzilla & PR sprint" at the first weekend of very month? We could organize this a bit by posting the currently "hot" bugs a few days ahead and also make sure that there are plenty of "bootcamp" bugs, s.t. even newcomers can start to get involved. Even if you aren't too much interested in this effort, being a bit more active on Slack/IRC or responsive on GitHub on this weekend would help, s.t. newcomers interested in squashing D bugs get over the initial hurdles pretty quickly and we can finally resolve the long-stalled PRs and find a consensus on them. What do you think? Is this something worth trying? Maybe the DLF could also step in and provide small goodies for all bug hunters of the weekend (e.g. a "D bug hunter" shirt if you got more than X PRs merged). I like this idea, but for reviewing/merging items as IMO that needs more focus (of course, we could combine the two). Again, just my opinion, but a crucial part is coordination between regular committers to get stuff reviewed and merged fast (especially for new contributors participating with PRs). Being on Slack/IRC and being responsive is of course the big one there, as you mentioned. Maybe also creating gitter chats for dmd/druntime/phobos so new contributors have a single point of contact? Due to a sharply increased workload at my day job I can't be available at most times for such a sprint, but I would participate over a weekend. If I know further in advance, I can commit to being active and available over the whole of the weekend. For this upcoming weekend, probably only Saturday and maybe Sunday night. Whether anyone else wants to throw their hat in or not, I'll make sure to be online for Saturday at least, and dedicate a good portion of the day to reviewing PRs and trying to push them through.
Re: Found on proggit: Krug, a new experimental programming language, compiler written in D
On Monday, 30 April 2018 at 16:20:38 UTC, H. S. Teoh wrote: As a native Chinese speaker, I find contortions of this kind mildly amusing but mostly ridiculous, because this is absolutely NOT how the language works. It is carrying an ancient scribal ivory-tower ideal of one syllable per word to ludicrous extremes, an ideal that's mostly unattained, because most so-called monosyllabic "words" in the language are in fact multi-consonantal clusters retroactively analysed as monosyllables. Isolated syllables taken out of their context have no real meaning of their own (except perhaps in writing, which again is an invention of the scribes that doesn't fully reflect the spoken reality [*]). Actually pronouncing the atrocity above might as well be speaking reverse-encrypted Klingon as far as comprehensibility by a native speaker is concerned. Oh yes, I'm well aware that there's a lot of semantic contortion required here, and that as spoken, this sounds like complete gibberish. I don't know where the monosyllable meme came from, either; it's readily apparently from learning even basic vocabulary. 今天, 马上, 故事, hell, 中国 is a compound word.
Re: Found on proggit: Krug, a new experimental programming language, compiler written in D
On Thursday, 26 April 2018 at 23:26:30 UTC, Walter Bright wrote: Besides, redundancy can make a program easier to read (English has a lot of it, and is hence easy to read). I completely agree. I always make an effort to make my sentences as redundant as possible such that they can be easily read and understood by anyone: Buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo. Unfortunately, I think the Chinese have us beat; they can construct redundant sentences far beyond anything we could ever imagine, and thus I predict that within 50 years Chinese will be the new international language of science, commerce, and politics: Shíshì shīshì Shī Shì, shì shī, shì shí shí shī. Shì shíshí shì shì shì shī. Shí shí, shì shí shī shì shì. Shì shí, shì Shī Shì shì shì. Shì shì shì shí shī, shì shǐ shì, shǐ shì shí shī shìshì. Shì shí shì shí shī shī, shì shíshì. Shíshì shī, Shì shǐ shì shì shíshì. Shíshì shì, Shì shǐ shì shí shì shí shī. Shí shí, shǐ shí shì shí shī shī, shí shí shí shī shī. Shì shì shì shì. 石室诗士施氏,嗜狮,誓食十狮。氏时时适市视狮。十时,适十狮适市。 是时,适施氏适市。氏视是十狮,恃矢势,使是十狮逝世。氏拾是十狮尸,适石室。石室湿,氏使侍拭石室。石室拭,氏始试食是十狮尸。食时,始识是十狮,实十石狮尸。试释是事。
Re: Found on proggit: Krug, a new experimental programming language, compiler written in D
On Thursday, 26 April 2018 at 15:07:37 UTC, H. S. Teoh wrote: On Thu, Apr 26, 2018 at 08:50:27AM +, Joakim via Digitalmars-d wrote: https://github.com/felixangell/krug https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/ It's still too early to judge, but from the little I've seen of it, it seems nothing more than just a rehash of C with a slightly different syntax. It wasn't clear from the docs what exactly it brings to the table that isn't already done in C, or any other language. T Author specified that it's just a hobby project.
Re: lazy evaluation of logical operators in enum definition
On Wednesday, 18 April 2018 at 10:19:20 UTC, Atila Neves wrote: On Wednesday, 18 April 2018 at 04:44:23 UTC, Shachar Shemesh wrote: On 17/04/18 13:59, Simen Kjærås wrote: [...] Also, extremely dangerous. Seriously, guys and gals. __traits(compiles) (and its uglier sibling, is(typeof())) should be used *extremely* sparingly. [...] A very good rule of thumb. I've lost count of how many times I've had a bug because of __traits(compiles) being false, but not in the way I expected it to be! Let me third that. Although this would not be as big of a problem if we had a way of printing out the values for failed template constraints (isn't this already done for static if now?)
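A small sketch of the failure mode being described; the trait, struct, and the missing primitive are all made up for illustration:

// Intent: "can I foreach over a T?"
enum canIterate(T) = __traits(compiles, {
    foreach (e; T.init) {}
});

struct NumberRange
{
    int front() { return 0; }
    bool empty() { return true; }
    // popFront() was forgotten
}

// This is false -- but __traits(compiles) can't tell you whether it's false
// because the type genuinely isn't iterable by design, or because of a
// one-character mistake in the type (or in the probe expression itself).
static assert(!canIterate!NumberRange);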
Re: PR duty
On Wednesday, 4 April 2018 at 05:31:10 UTC, Andrei Alexandrescu wrote: Hi folks, I was thinking of the following. To keep the PR queue trim and in good shape, we'd need at least one full-time engineer minding it. I've done that occasionally, and the queue size got shorter, but I couldn't do much else during that time. I was thinking, we can't afford a full-time engineer, and even if we did, we'd probably have other important matters for that engineer as well. However, what we can afford - and indeed already benefit from - is a quantum of time from each of many volunteers. By organizing that time better we may be able to get more output. Here's what I'm thinking. Let's define a "PR duty" role that is one week long for each of a pool of volunteers. During that week, the person on PR duty focuses on minding github queues - merge trivial PRs, ping authors of old PRs, email decision makers for specific items in PRs, etc. Then the week ends and the role is handed off to the next person in the pool. A calendar maintained by an impartial person - maybe we can ask Mike - would keep track of everything. On the actual topic of the thread, the scrum master for this sprint could have some of these duties rolled in as well. Therefore, for a given sprint, they would be in charge of: - taking care of trivial PRs - pinging authors of old PRs(?) - email decision makers - make sure previously decided-upon action items are taken care of before the next sprint - being a central point of contact for questions on items for that sprint - generally being available on Slack and coordinating the team I wonder if the Github bot can be configured to automatically tag new items for the next sprint... Also, I put a question mark beside "pinging authors of old PRs" because that seems like something the bot could also do automatically (maybe ping every 2 weeks, and if the submitter has not responded after 3 pings it's auto-closed).
Re: PR duty
On Wednesday, 4 April 2018 at 05:31:10 UTC, Andrei Alexandrescu wrote: Hi folks, I was thinking of the following. To keep the PR queue trim and in good shape, we'd need at least one full-time engineer minding it. I've done that occasionally, and the queue size got shorter, but I couldn't do much else during that time. I was thinking, we can't afford a full-time engineer, and even if we did, we'd probably have other important matters for that engineer as well. However, what we can afford - and indeed already benefit from - is a quantum of time from each of many volunteers. By organizing that time better we may be able to get more output. Here's what I'm thinking. Let's define a "PR duty" role that is one week long for each of a pool of volunteers. During that week, the person on PR duty focuses on minding github queues - merge trivial PRs, ping authors of old PRs, email decision makers for specific items in PRs, etc. Then the week ends and the role is handed off to the next person in the pool. A calendar maintained by an impartial person - maybe we can ask Mike - would keep track of everything. The most obvious candidates for PR duty engineers would be the most prolific contributors in the respective repositories. One question would be how many distinct pools/tracks we should have. Presumably someone fluent with phobos is not necessarily fluent with dmd. So probably we need at least two tracks: * dmd * everything else (druntime, phobos, tools, site) If there are a dozen of us in each pool, each would be on duty one week every three months. Even with eight, we'd be on duty a manageable week every other month. Please share your thoughts. Thanks, Andrei Something adjacent but related that I've been wanting to suggest for awhile is that we hold a semi-weekly or weekly scrum for Phobos and/or dmd (with possibly overlapping but not necessarily identical groups). I'm thinking that you would attend the Phobos one and Walter the dmd one (I don't think it'd be necessary to have you at _every_ meeting). The idea is that we go over each PR opened that week and decide what to do with them (merge, close, needs guidance, needs W/A decision, etc.) The scrum master that sprint (which could rotate between people) is responsible for getting a decision from you or Walter on items (if you're not at the scrum), ensuring action items are followed up on by those they're assigned to, just generally coordination and administrative tasks... The goal is to improve the team's velocity such that we are handling every PR that comes in for that week, and then start eating into the backlog. IMO that backlog should be prioritized, but I'm not certain how to go about that yet. Anyway, end result is that hopefully we get the queue down and are for the most part just handling new PRs each week. A big component of this is you being available for, say, at least one meeting a month. That being said, one big stumbling block I see is people being located in wildly different timezones, such as Sebastian in Germany, you on the east coast, Mike in Korea, etc.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 04:50:15 UTC, rumbu wrote: On Monday, 2 April 2018 at 22:55:58 UTC, Meta wrote: On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: void foo(IRange someRange) { //do something with someRange even it's a struct //this includes code completion and other IDE specific stuff. } In D, template constrains are not very clean and they not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs inheriting from interfaces. HPC# allows interface inheritance, but does not box structs. It's clearly stated in the video (15:30). In fact, boxing would bring up the GC, and GC is not allowed in HPC#. Oh, that's really neat (was on mobile and could not watch the video).
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Monday, 2 April 2018 at 22:55:58 UTC, Meta wrote: On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote: - Only structs are used, no classes; - .NET collections are replaced by native collections that manage their own memory - No code that would trigger GC is allowed - Compiler is aware of Unity features and is able to explore SIMD, by doing auto-vectorization, and transparently transform structs fields into optimal representations The struct type in C# is more versatile than the D's equivalent, mainly because of the fact that you can inherit interfaces. You can have template constraints in D but this is not as user friendly as a struct interface. So in C# you can write code like this: interface IRange { void popFront(); bool empty(); T front(); } struct MyRange: IRange { //implementation } void foo(IRange someRange) { //do something with someRange even it's a struct //this includes code completion and other IDE specific stuff. } In D, template constrains are not very clean and they not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs inheriting from interfaces. To clarify, the struct will be boxed when passing it to a function that accepts an IFoo, or if you do `IFoo foo = someStruct` or the like.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote: - Only structs are used, no classes; - .NET collections are replaced by native collections that manage their own memory - No code that would trigger GC is allowed - Compiler is aware of Unity features and is able to explore SIMD, by doing auto-vectorization, and transparently transform structs fields into optimal representations The struct type in C# is more versatile than the D's equivalent, mainly because of the fact that you can inherit interfaces. You can have template constraints in D but this is not as user friendly as a struct interface. So in C# you can write code like this: interface IRange { void popFront(); bool empty(); T front(); } struct MyRange: IRange { //implementation } void foo(IRange someRange) { //do something with someRange even it's a struct //this includes code completion and other IDE specific stuff. } In D, template constrains are not very clean and they not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs inheriting from interfaces.
Re: newCTFE Status March 2018
On Friday, 30 March 2018 at 19:48:02 UTC, Stefan Koch wrote: Hello Guys, I took a few days off over easter and I have very good news for you. The following code will now compile and execute correctly using newCTFE.

---
class C
{
    int i() { return 1; }
}

class D : C
{
    override int i() { return 2; }
    float f() { return 1.0f; }
}

class E : D
{
    override int i() { return 3; }
    override float f() { return 2.0f; }
}

int testClassStuff()
{
    C c1, c2, c3;
    D c4;
    c1 = new C();
    c2 = new D();
    c3 = new E();
    D e = new E();
    assert(cast(int)e.f() == 2);
    return c1.i + c2.i + c3.i;
}

static assert(testClassStuff == 1 + 2 + 3);
---

In short this means that classes and virtual function calls work now, albeit currently only if you don't define your own constructor, which would currently get treated as a normal function and therefore not set the vtbl pointer correctly. I'd also like to note that the vtbl handling is backend independent, which means that you can code your own backend for newCTFE without having to deal with the fact that vtbl and constructor stuff is going on. To you it's just load, store and call. :) Have a nice easter. Stefan

newCTFE is looking very cool. Glad to see you're still working at it.
Re: D compiles fast, right? Right??
On Friday, 30 March 2018 at 16:12:44 UTC, Atila Neves wrote: Fast code fast, they said. It'll be fun, they said. Here's a D file: import std.path; Yep, that's all there is to it. Let's compile it on my laptop: /tmp % time dmd -c foo.d dmd -c foo.d 0.12s user 0.02s system 98% cpu 0.139 total That... doesn't seem too fast to me. But wait, there's more: /tmp % time dmd -c -unittest foo.d dmd -c -unittest foo.d 0.46s user 0.06s system 99% cpu 0.525 total Half. A. Second. AKA "an eternity" in dog years, err, CPU time. I know this has been brought up before, and recently even, but, just... just... sigh. So I wondered how fast it'd be in Go, since it's got a reputation for speedy compilation: package foo import "path" func Foo() string { return path.Base("foo") } /tmp % time go tool compile foo.go go tool compile foo.go 0.01s user 0.01s system 117% cpu 0.012 total See, now that's what I'd consider fast. It has actual code in the file because otherwise it complains the file isn't using the imported package, because, Go things. It compiled so fast I had to check I'd generated an object file, and then I learned you can't use objdump on Go .o files, because... more Go things (go tool objdump for the curious). Ok, so how about C++, surely that will make D look good? #include // yes, also a one-liner /tmp % time /usr/bin/clang++ -std=c++17 -c foo.cpp /usr/bin/clang++ -std=c++17 -c foo.cpp 0.45s user 0.03s system 96% cpu 0.494 total /tmp % time /usr/bin/g++ -std=c++17 -c foo.cpp /usr/bin/g++ -std=c++17 -c foo.cpp 0.39s user 0.04s system 99% cpu 0.429 total So yeeah. If one is compiling unit tests, which I happen to pretty much only exclusively do, then trying to do anything with paths in D is 1. Comparable to C++ in build times 2. Actually _slower_ than C++ (who'd've thunk it?) * 3. Gets lapped around Captain America vs The Falcon style about 50 times by Go. And that's assuming there's a crazy D programmer out there (hint: me) that actually tries to compile minimal units at a time (with actual dependency tracking!) instead of the whole project at once, otherwise it'll take even longer. And this to just import `std.path`, then there's the actual work you were trying to get to. Today actually made me want to write Go. I'm going to take a shower now. Atila * Building a whole project in C++ still takes a lot longer since D scales much better, but that's not my typical worflow, nor should it be anyone else's. Yeah, that's pretty bad, relatively, for a no-op build. Probably some CTFE or template stuff that gets pulled in by one of the imports.
Re: does it scale to have 1 person approve of all phobos additions?
On Wednesday, 21 March 2018 at 21:25:55 UTC, Andrei Alexandrescu wrote: On 03/20/2018 06:56 PM, Meta wrote: Does it make sense? In my opinion, no, but according to Andrei be has tried being less hands-on before and it resulted in measurably worse quality code in Phobos; thus, he re-established himself as the gatekeeper. I agree that it doesn't scale and think that at this point, it's probably actively hurting Phobos because a lot of good work sits for so long and eventually becomes abandoned. On the other hand, it could become much worse for Phobos if he was entirely hands off and delegated its shepherding to a larger group of core contributors. A balance has to be struck somewhere... Maybe a hypothetical group like this needs to be trained by Andrei such that he can trust them to properly guide Phobos' development, and will only come to him with the really big, important stuff. Thanks for this comment (which is eerily accurate), and thanks Timothee for raising the matter. It is an ongoing burden to be the decider on new API additions to Phobos; indeed I have taken this responsibility because I have attempted to relinquish it in the past, with negative results. It is definitely not something that I prefer or enjoy, and am permanently on the lookout for people with similar design sensibilities to share the burden with. The door is open, if not kicked off its hinges. Please take note! That said, the question of scalability is a bit misplaced. API additions to Phobos are rare and long-lasting; it is entirely appropriate to let them ripe a little. In contrast, various improvements to Phobos - over 100 of them - only need good reviews, and are obviously bottlenecked by our general lack of reviewers. That's our real bottleneck. It seems appropriate to ask the question why we'd ask for acceleration of API additions without improving response for other work. I just reviewed https://github.com/dlang/phobos/pull/6178. As I'd expected, it's good work - which is exactly the matter. Good work in a submission means most review work. It's not bad work, which can be easily rejected. And it's not brilliant work, which can be easily accepted. The PR has bugs and quality issues that any reviewer could find and provide feedback on. It's not in the state where it's bottlenecked by just a stamp of approval. Naming is a matter I wanted to defer having a debate on. We should call the facility staticArray to prevent an imaginary conversation like this: Q: "So I have a range here, how do I get an array from it?" A: "Easy, just append .array to it and you're done." Q. "Cool! Now I need a static array. Wait! Don't tell me, don't tell me... staticArray is what I should look for!" A: "Um, no, sorry. That's called asStatic." Besides, [1,2].asStatic may be guessed right by a reader, but myrange.asStatic!2 most likely not. Thanks, Andrei Thanks. I stewed on this for a few days, and now it's 3 AM and I wrote a long reply but deleted it. I agree with most of what you you've said, and am progressively agreeing less with what I said. Mostly, I'm just frustrated and don't really have any good solutions but the PR queue keeps growing. I'll go review something.
Re: D, Parasail, Pascal, and Rust vs The Steelman
On Thursday, 22 March 2018 at 11:58:02 UTC, Shachar Shemesh wrote: Interesting that the author's criticism of Rust lines up very closely with Andrei's. Spoken on the forum for a language that has still not managed to make sure that a destructor actually gets called every time an object is destroyed. Shachar Just an observation. I wasn't criticizing Rust.
Re: D, Parasail, Pascal, and Rust vs The Steelman
On Wednesday, 21 March 2018 at 12:52:19 UTC, Paulo Pinto wrote: An article comparing the above languages as per the DoD language requirements [0]. http://jedbarber.id.au/steelman.html [0] - https://en.wikipedia.org/wiki/Steelman_language_requirements "The central failure of the language is the myopic focus on the affine typing solution to heap allocation and thread safety. The creators do not seem to realise that other solutions already exist, and that dynamic memory allocation is not the only safety issue a programmer has to cope with." Interesting that the author's criticism of Rust lines up very closely with Andrei's.
Re: Flaw in DIP1000? Returning a Result Struct in DIP1000
On Wednesday, 21 March 2018 at 17:13:40 UTC, Jack Stouffer wrote: Consider this example simplified from this PR https://github.com/dlang/phobos/pull/6281 -- struct GetoptResult { Option[] options; } struct Option { string optShort; string help; } GetoptResult getopt(T...)(scope T opts) @safe { GetoptResult res; auto o = Option(opts[0], opts[1]); res.options ~= o; return res; } void main() @safe { bool arg; getopt("arg", "info", &arg); } -- $ dmd -dip1000 -run main.d -- main.d(16): Error: scope variable o assigned to non-scope res main.d(23): Error: template instance `onlineapp.getopt!(string, string, bool*)` error instantiating -- The only way I've found to make the code compile and retain the pre-dip1000 behavior is to change the Option construction to -- auto o = Option(opts[0].idup, opts[1].idup); -- How can we return non-scoped result variables constructed from scope variables without copies? I thought that maybe adding a function to Option and marking it as `scope` would work: struct GetoptResult { Option[] options; void addOptions(scope Option opt) @safe scope { options ~= opt; //Error: scope variable opt assigned to non-scope this } } But the compiler doesn't like that. However, I _did_ get it working by doing this: GetoptResult getopt(T...)(scope T opts) @safe { return GetoptResult([Option(opts[0], opts[1])]); } Which is not ideal, obviously, but the notion that some code has to be rewritten to accommodate ownership semantics is not a new one; one of the major complaints I've seen about Rust is that it requires you to adjust your coding style to satisfy the borrow checker.
Re: does it scale to have 1 person approve of all phobos additions?
Does it make sense? In my opinion, no, but according to Andrei he has tried being less hands-on before and it resulted in measurably worse quality code in Phobos; thus, he re-established himself as the gatekeeper. I agree that it doesn't scale and think that at this point, it's probably actively hurting Phobos because a lot of good work sits for so long and eventually becomes abandoned. On the other hand, it could become much worse for Phobos if he were entirely hands-off and delegated its shepherding to a larger group of core contributors. A balance has to be struck somewhere... Maybe a hypothetical group like this needs to be trained by Andrei such that he can trust them to properly guide Phobos' development, and will only come to him with the really big, important stuff. By the same measure, I feel that Walter is becoming a bottleneck on dmd development and maybe a similar solution is necessary.
Re: Bachelor level projects
On Tuesday, 20 March 2018 at 14:44:39 UTC, Alexandru Jercaianu wrote: Hello, At the Polytechnic University of Bucharest we are organizing a special program called CDL[1], where Bachelor students are mentored to make their first open source contributions. I think it's a great idea to involve D in this program, but for this to be successful, I need your help in finding ideas for Bachelor level projects, which can be solved until the end of May (anything from new features to more impactful bugs). If there is anything on your wish list which matches the criteria above, feel free to share. Thanks, Alex J Multiple alias this is a good one. The prospective student even has a starting point to work off of: https://github.com/dlang/dmd/pull/3998 (out of date and written against the old C++ compiler, though).
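For readers unfamiliar with the feature, here is a rough sketch of what "multiple alias this" would let you write. The names are invented for illustration, and the second alias this is exactly the part that does not exist yet (current compilers accept only one):

struct Reading
{
    double celsius;
    string label;

    alias celsius this;   // one alias this: allowed today
    // alias label this;  // a second alias this: the feature the project would add
}

void main()
{
    auto r = Reading(21.5, "office");
    double d = r;         // works today via the single alias this
    // string s = r;      // would also work if the second alias this existed
}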
Re: rvalue types
On Tuesday, 13 March 2018 at 17:33:14 UTC, H. S. Teoh wrote: I think the general idea is a good approach, and it seems that ultimately we're just reinventing expression DSLs. Overloading built-in operators works up to a point, and then you really want to just use a string DSL, parse that in CTFE and use mixin to codegen. That frees you from the spaghetti template expansions in expression templates, and also frees you from being limited by built-in operators, precedence, and syntax. IMO one of the advantages that Dmitry's approach has is that you don't have to do the lexing during CTFE, which may slow things down even more. It's already done for you by the user.
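To make the string-DSL-plus-mixin idea concrete, here is a deliberately tiny sketch. There is no real parsing here, just CTFE string building, and the names are made up for illustration:

string emit(string expr)
{
    // A real DSL would parse `expr` at compile time and generate optimized
    // code; this sketch just splices the expression back into a declaration.
    return "auto result = " ~ expr ~ ";";
}

void main()
{
    int a = 2, b = 3;
    mixin(emit("a * b + 1")); // emit() runs during compilation
    assert(result == 7);
}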
Re: D course material
On Tuesday, 13 March 2018 at 12:39:24 UTC, Dmitry Olshansky wrote: Hi, folks! I’m testing the waters for a D course at one university, for the first time; it’ll be an optional thing. It’s still being discussed but may very well become a reality. Before you ask - no, I’m not lecturing and in fact, I didn’t suggest D in the first place! Academics are finally seeing the light in the gloom of a one-year OOP-in-C++ course having underwhelming results. Now to the point, I remember Chuck Allison (pardon if I misspelled) doing D lectures at Utah Valley University, here: https://dconf.org/2014/talks/allison.html There is also Ali’s book. But anything else easily adoptable as course material? — Dmitry Olshansky Honestly I'd recommend TDPL. It's got a lot of good real-world examples, including some OOP ones, but more importantly examples that demonstrate concurrent, generic, and procedural programming, and I think a few functional examples as well. Basically, it covers a very broad area in one book while also teaching you D.
Re: Fact check: when did D add static if?
On Thursday, 8 March 2018 at 04:12:12 UTC, Adam D. Ruppe wrote: On Thursday, 8 March 2018 at 04:09:17 UTC, Meta wrote: Has D had static if since its inception, or was it added somewhere along the way? https://digitalmars.com/d/1.0/changelog1.html What's New for D 0.124 May 19, 2005 New/Changed Features *snip* Added static if. That's before my time! But by looking at the changelog, static if was pretty weak early on and expanded over time. Thanks Adam, that was exactly what I was looking for.
Fact check: when did D add static if?
Has D had static if since its inception, or was it added somewhere along the way?
#dbugfix: Unclear error message when trying to inherit from multiple classes
class Test: Foo, Bar, Baz { } class Foo {} class Bar {} class Baz {} Error: class `Test` base type must be interface, not Bar Error: class `Test` base type must be interface, not Baz I thought this error message used to be a lot better; along the lines of "D does not support multiple inheritance. Use interfaces instead." It'd be nice if it was changed to be clearer about what the error is. https://issues.dlang.org/show_bug.cgi?id=18574
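For anyone hitting this for the first time, the fix the message is hinting at looks like this (a class may have at most one base class; everything after it must be an interface):

class Foo {}
interface Bar {}
interface Baz {}

// Compiles: one base class plus any number of interfaces.
class Test : Foo, Bar, Baz {}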
Re: implicit construction operator
On Monday, 26 February 2018 at 19:25:06 UTC, WebFreak001 wrote: Now this would be really useful for Variant: --- struct Variant { this(U)(U value) @implicit { ... } } void bar(Variant x, Variant y) {} Variant[] myObjects = [1, 2, "abc", new Node()]; Variant a = 4; bar(4, "asdf"); --- This is possible in the language today using the implicit class construction feature of runtime variadic arrays: class VArray { Variant[] va; this(T...)(T ts) { foreach(t; ts) { va ~= Variant(t); } } } void test(VArray ta...) { foreach (v; ta.va) { writeln(v.type); } } void main() { test(1, "asdf", false); } What's your opinion on this? This is a very slippery slope to fall down. Even `alias this` is pushing the limit of what I think we should allow. That said, there is exactly 1 case where I really, really want some kind of implicit conversion: struct Success {} struct Error { string msg; } alias Result = Algebraic!(Success, Error); Result connectToHost(IPAddress host) { //do some stuff
if (operationSucceeded) { return Success(); } else { return Error(statusMessage); } } This currently doesn't work, and you instead have to return Result(Success()) and Result(Error(statusMessage)). I would love to have some way of implicitly constructing an Algebraic from any of the possible underlying types. It would bring us very close (if not all the way) to having Algebraic work identically to sum types in other languages such as Rust, Swift, Haskell, etc. Having to explicitly wrap the values in an Algebraic doesn't seem like a big deal, but it makes it really annoying to use in everyday code.
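As a stopgap that works today, the wrapping can be hidden behind small helper functions. This is only a sketch (the signature is simplified to keep it self-contained), not a replacement for real implicit construction:

import std.variant : Algebraic;

struct Success {}
struct Error { string msg; }
alias Result = Algebraic!(Success, Error);

// Helpers so call sites read almost like the implicit version above.
Result ok() { return Result(Success()); }
Result fail(string msg) { return Result(Error(msg)); }

Result connectToHost(bool operationSucceeded)
{
    return operationSucceeded ? ok() : fail("connection refused");
}

void main()
{
    auto r = connectToHost(false);
    assert(r.type == typeid(Error));
}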
Re: Aliasing multiple delegates to the same name - very strange behaviour
On Sunday, 25 February 2018 at 08:07:03 UTC, user1234 wrote: On Sunday, 25 February 2018 at 05:16:21 UTC, Meta wrote: On Sunday, 25 February 2018 at 04:59:58 UTC, Basile B. wrote: Use templates to prevent implicit conversion: alias f(T = int) = (T n) => 0; alias f(T = char) = (T n) => 'a'; alias f(T = bool) = (T n) => false; Bug report is invalid and can be closed. Please don't be so hasty. The main focus of that defect is whether it is a bug or a feature that the same alias can be declared multiple times. I've updated the title to reflect that. Aliases are not things, they are what they alias. In your case all are functions so this is an overload set. I was about to say that no such syntax for creating an overload set exists, but I found this tucked away in the documentation (https://dlang.org/spec/function.html#overload-sets): Overload sets can be merged with an alias declaration: import A; import B; alias foo = A.foo; alias foo = B.foo; void bar(C c) { foo();// calls A.foo() foo(1L); // calls A.foo(long) foo(c); // calls B.foo(C) foo(1,2); // error, does not match any foo foo(1); // calls B.foo(int) A.foo(1); // calls A.foo(long) } So it looks like this *is* valid syntax, in which case the question becomes, is it intended behaviour that you can use the overload set syntax with function literals? I can't think of any possible method of assigning the same name to different literals. Where is the bug? Being able to add function literals to an overload set, or the fact that the set only contains the first function and none of the subsequently added ones?
Re: Aliasing multiple delegates to the same name - very strange behaviour
On Sunday, 25 February 2018 at 04:59:58 UTC, Basile B. wrote: Use templates to prevent implicit conversion: alias f(T = int) = (T n) => 0; alias f(T = char) = (T n) => 'a'; alias f(T = bool) = (T n) => false; Bug report is invalid and can be closed. Please don't be so hasty. The main focus of that defect is whether it is a bug or a feature that the same alias can be declared multiple times. I've updated the title to reflect that.
Re: Aliasing multiple delegates to the same name - very strange behaviour
On Sunday, 25 February 2018 at 04:47:47 UTC, Nicholas Wilson wrote: On Sunday, 25 February 2018 at 04:06:43 UTC, Meta wrote: I just filed this bug: https://issues.dlang.org/show_bug.cgi?id=18520 Not only does the following code compile and link successfully, it prints 0 three times when ran: alias f = (int n) => 0; alias f = (char c) => 'a'; alias f = (bool b) => false; void main() { import std.stdio; writeln(f(int.init)); //Prints 0 writeln(f(char.init)); //Prints 0 writeln(f(bool.init)); //Prints 0 } [...] 4. Is there any different semantically or mechanically between my first and second examples? Type promotions to int maybe? Have you tried casting them? void main() { import std.stdio; writeln(f(cast(int)int.init)); writeln(f(cast(char)char.init)); writeln(f(cast(bool)bool.init)); } Ah, I tried changing it to the following: struct NoPromote {} alias f = (int n) => 0; alias f = (char c) => 'a'; alias f = (NoPromote np) => NoPromote(); void main() { import std.stdio; writeln(f(int.init)); //Prints 0 writeln(f(char.init)); //Prints 0 writeln(f(NoPromote.init)); //Prints 0 } And I get "Error: function literal __lambda5 (int n) is not callable using argument types (NoPromote)". It was already apparent from the fact that the program printed 0 each time, but this confirms that the first function literal is the only one that _really_ gets aliased to f. Actually, this is unnecessary, because if I just change the order and move the bool function up to be the first, I get "Error: function literal __lambda4 (bool b) is not callable using argument types (char)". Did I mention how much I hate the fact that char and bool implicitly convert to int?
Aliasing multiple delegates to the same name - very strange behaviour
I just filed this bug: https://issues.dlang.org/show_bug.cgi?id=18520 Not only does the following code compile and link successfully, it prints 0 three times when run:

alias f = (int n) => 0;
alias f = (char c) => 'a';
alias f = (bool b) => false;

void main()
{
    import std.stdio;
    writeln(f(int.init));  //Prints 0
    writeln(f(char.init)); //Prints 0
    writeln(f(bool.init)); //Prints 0
}

However, when I change the code to the following, it works as one could reasonably expect, given the circumstances:

int f1(int n) { return 0; }
char f2(char c) { return 'a'; }
bool f3(bool b) { return false; }

alias f = f1;
alias f = f2;
alias f = f3;

void main()
{
    import std.stdio;
    writeln(f(int.init));  //Prints 0
    writeln(f(char.init)); //Prints 'a'
    writeln(f(bool.init)); //Prints false
}

I've got some questions: 1. Which is the intended behaviour? Should this code fail to compile and there's a bug with aliases, or should this code compile and my first example work correctly, but there is currently a bug where this feature interacts badly with function/delegate literals? 2. If the answer to 1 is "this code should compile and work correctly", in what cases does D allow multiple aliases with the same name to be defined, as in my first and second example (which compile without issue)? 3. Is this related to overload sets in some way? 4. Is there any difference semantically or mechanically between my first and second examples?
Re: Typedef.toString?
On Friday, 23 February 2018 at 13:56:35 UTC, Denis F wrote: On Thursday, 22 February 2018 at 20:26:17 UTC, Meta wrote: find all this inclusions. Maybe it is need to implement simple toString method inside of Typedef struct? Or just disable it at all? Yes. I doubt this behaviour is intended, so it's likely an oversight in Typedef's implementation. Should be disabled also: factory opCmp opEquals toHash ? No, Typedef just needs a custom implementation of `toString`.
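A sketch of the kind of forwarding toString meant here, shown on a hand-rolled wrapper rather than the real std.typecons.Typedef (the name Wrapper is invented):

struct Wrapper(T)
{
    T payload;
    alias payload this;

    // Convert the wrapped value instead of printing the wrapper struct itself.
    string toString() const
    {
        import std.conv : to;
        return payload.to!string;
    }
}

void main()
{
    import std.conv : to;
    auto id = Wrapper!int(42);
    assert(id.to!string == "42"); // the wrapped value, not the struct's default representation
}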
Re: Typedef.toString?
On Thursday, 22 February 2018 at 19:56:13 UTC, Denis F wrote: Hello! After replacing native type by std.typecons.Typedef I am faced with fact what all typeDefValue.to!string was silently changed its output to output of Typedef struct itself. It was too hard find all this inclusions. Maybe it is need to implement simple toString method inside of Typedef struct? Or just disable it at all? Yes. I doubt this behaviour is intended, so it's likely an oversight in Typedef's implementation.
Re: how to get typeid of extern(C++) classes?
On Friday, 16 February 2018 at 00:42:02 UTC, Timothee Cour wrote: C++ exposes it via typeid so in theory all the info is there ; It's been awhile since I've written any C++ code, but as I remember it, this type of type info is not available unless you enable it with a (C++) compiler switch.
Re: OT: Photo of a single atom by David Nadlinger wins top prize
On Wednesday, 14 February 2018 at 16:45:49 UTC, user1234 wrote: On Wednesday, 14 February 2018 at 01:11:33 UTC, David Nadlinger wrote: On Tuesday, 13 February 2018 at 23:09:07 UTC, Ali Çehreli wrote: David (aka klickverbot) is a longtime D contributor […] … who is slightly surprised at the amount of media interest this has attracted. ;) — David Damn it's on top of Google News, Category Science, for U.S ! https://imgur.com/a/U5DYA "A scientist captured an impossible photo of a single atom" Gotta love clickbait articles. It's by definition possible because... David's done it.
Re: Workaround for https://issues.dlang.org/show_bug.cgi?id=18422?
On Sunday, 11 February 2018 at 15:34:07 UTC, Andrei Alexandrescu wrote: I'm trying to sketch a simple compile-time reflection system, and https://issues.dlang.org/show_bug.cgi?id=18422 is a blocker of the entire approach. My intent is to have a struct Module, which can be initialized with a module name; then: struct Module { private string name; Data[] data(); // all data declarations Function[] functions(); Struct[] structs(); Class[] classes(); Union[] unions(); Enum[] enums(); } Then each of those types carries the appropriate information. Notably, there are no templates involved, although all code is evaluated during compilation. Non-data information (types, qualifiers etc) is carried as strings. This allows for simple arrays to convey heterogeneous information such as "all functions in this module", even though their signatures are different. This makes for a simple and easy to use system for introspecting things during compilation. Clearly in order to do that some of these compile-time strings must be mixed in, which is why https://issues.dlang.org/show_bug.cgi?id=18422 is so problematic. Until we discuss a fix, are there any workarounds? Thanks, Andrei If you need a workaround that doesn't have a struct or function templated on `name`, then no, I don't think there is. The problem is that there are two different kinds of compile time: https://wiki.dlang.org/User:Quickfur/Compile-time_vs._compile-time
Re: Which language futures make D overcompicated?
On Friday, 9 February 2018 at 07:54:49 UTC, Suliman wrote: I like D, but sometimes it's look like for me too complicated. Go have a lot of fans even it not simple, but primitive. But some D futures make it very hard to learning. Small list by me: 1. mixins 2. inout 3. too many attributes like: @safe @system @nogc etc Which language futures by your opinion make D harder? I can't say that I've ever really found D complicated. I think the main reason for that is because my first language was C++, and there's really nowhere to go but up from there (I was experienced with a few other languages as well like Java, Scheme, Basic, etc. but none I would regard as complex). I think the perception of D being complicated is more from programmers coming from Python/Ruby/JS (and to a lesser extent, Haskell/Scheme/Java). D is quite different if you're coming from a "VM" or "scripting" language because it exposes you to a lot of new concepts such as static typing, value types, templates, monomorphization, immutability, memory layout, linking and compilation, compile-time vs. runtime, etc. It's not that these programmers are less skilled or less knowledgeable; it's that if they've never used a language that has forced them to consider these concepts, then it looks to them like D is a massive step up in complexity compared to the language that they're used to. I think if you asked 100 C++ programmers whether they thought D was a complicated language, 99 of them would say no. If you ask 100 Python programmers, 99 would probably say yes.
Re: Which language futures make D overcompicated?
On Friday, 9 February 2018 at 17:31:47 UTC, Adam D. Ruppe wrote: On Friday, 9 February 2018 at 16:44:32 UTC, Seb wrote: Forget inout, it's seldomly used and there have even attempts to remove it from the language. inout rox. I think this is more of a documentation discoverability problem. We should be having people read the spec, which is written toward compiler authors [!], when they want to just know how to use it. Here's the basic rules of thumb: If you don't need to change a variable: 1) use immutable when declaring a new variable immutable myvar = "never gonna change"; 2) if you are returning a member variable or function argument, use inout on both class myclass { Object member; inout(Object) getMember() inout { return member; } } inout(char)* identity(inout(char)* s) { return s; } My main issue with inout is the following: struct Option(T) { bool isNull; T payload; this(inout(T) val) inout { payload = val; } bool opEquals(inout(T) val) inout { return !this.isNull && (this.get() == val); } inout(T) get() inout { return payload; } } struct InoutHeaven { int n; } void main() { immutable Option!InoutHeaven v1 = InoutHeaven(1); assert(v1 == InoutHeaven(1)); } Everything is fine until InoutHeaven defines a custom opEquals: struct InoutHell { int n; bool opEquals(InoutHell other) { return n == other.n; } } void main() { immutable Option!InoutHell v1 = InoutHell(1); //Welcome to Inout Hell >:^) //Error: mutable method onlineapp.InoutHell.opEquals is not callable using a inout object assert(v1 == InoutHell(1)); } The really frustrating thing is that as far as I know, there's nothing you can do if you don't have control over the wrapped type. If you can't add your own inout or const opEquals method, you're screwed. I might be wrong about this though, as I think Steven has debunked this on at least one occasion. However, I can't remember what his solution was, if there was one.
Re: Which language futures make D overcompicated?
On Friday, 9 February 2018 at 18:21:55 UTC, Bo wrote: * scope() .. just call it "defer" just as every other language now does. It only confuses people who come from other languages. It's now almost a standard. By using scope, people have no clue that D has a defer. Took even me a while to know that D had a defer system in place. The funny thing is that D had this feature long before any other language that I can think of (of course Lisp has probably had 6 different implementations of it since 1972). They're the ones that need to get with the program ;-)
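For anyone who knows the feature only by the name "defer", this is the construct in question:

import std.stdio;

void main()
{
    auto f = File("out.txt", "w");
    scope(exit) f.close(); // runs when the enclosing scope is left, however it is left
    f.writeln("hello");
    writeln("body done");  // prints before the scope(exit) handler runs
}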
Re: Language Idea #6892: in array ops, enable mixing slices and random access ranges
On Monday, 5 February 2018 at 17:35:45 UTC, Guillaume Piolat wrote: General idea Currently arrays ops express loops over slices. a[] = b[] * 2 + c[] It would be nice if one could mix a random access range into such an expression. The compiler would have builtin support for random access range. Example === -->3--- import std.algorithm; import std.array; import std.range; void main() { int[] A = [1, 2, 3]; // arrays ops only work with slices A[] += iota(3).array[]; // Check that iota is a random access range auto myRange = iota(3); static assert( isRandomAccessRange!(typeof(myRange)) ); // Doesn't work, array ops can't mix random access ranges and slices // NEW A[] += myRange[]; // whatever syntax could help the compiler } -->3--- How it could work = A[] += myRange[]; // or another syntax for "myRange as an array op operand" would be rewritten to: foreach(i; 0..A.length) A[i] += myRange[i]; myRange should not be a range without "length". Why? Bridges a gap between lazy generation and array ops, now that array ops are reliable. Allow arrays ops to take slice-like objects. What do you think? It's already possible, with only very slightly worse aesthetics: struct VecOp(T) { T[] arr; pragma(inline, true) T[] opOpAssign(string op: "+", Range)(Range r) { int i; foreach (e; r) { arr[i] += e; i++; } return arr; } } pragma(inline, true) VecOp!E vecOp(E)(return E[] arr) { return typeof(return)(arr); } void main() { import std.range: iota; int[] a = [1, 2, 3]; a.vecOp += iota(3); assert(a == [1, 3, 5]); } I'm not very good at reading assembly, so I have no idea whether it's comparable to doing `a[] += [0, 1, 2]`.
Re: Annoyance with new integer promotion deprecations
On Tuesday, 6 February 2018 at 00:18:08 UTC, Jonathan M Davis wrote: On Monday, February 05, 2018 15:27:45 H. S. Teoh via Digitalmars-d wrote: On Mon, Feb 05, 2018 at 01:56:33PM -0800, Walter Bright via Digitalmars-d wrote: > The idea is a byte can be implicitly converted to a dchar, > [...] This is the root of the problem. Character types should never have been implicitly convertible to/from arithmetic integral types in the first place. +1 Occasionally, it's useful, but in most cases, it just causes bugs - especially when you consider stuff like appending to a string. - Jonathan M Davis I remember a fairly old defect, or maybe it was just a post in the Learn forum. Doing "string" ~ 0 would append '\0' to a string, because the int was auto-converted to a char. This still works today: import std.stdio; void main() { string msg = "Hello" ~ 0 ~ " D" ~ '\0'; writeln(msg); writeln(cast(ubyte[])msg); writeln(cast(ubyte[])"Hello D"); }
Re: Quora: Why hasn't D started to replace C++?
On Wednesday, 31 January 2018 at 11:42:14 UTC, Seb wrote: Yes, obviously the current situation isn't ideal, but it's not too bad either and we have found one good, but probably not so well-known yet way to tackle this: the dlang-community organization on GH (https://github.com/dlang-community). A lot of important, but more or less abandoned repositories have been adopted, s.t. there's a common place to submit bug fixes and feature PRs and its ensured by CIs that they are always in a good state, e.g. always compile with the latest DMD. Wait, have libdparse et al. been abandoned? What happened to Brian?
Re: The most confusing error message
On Wednesday, 24 January 2018 at 07:21:09 UTC, Shachar Shemesh wrote: test.d(6): Error: struct test.A(int var = 3) is used as a type Of course it is. That's how structs are used. Program causing this: struct A(int var = 3) { int a; } void main() { A a; } To resolve, you need to change A into A!(). For some reason I have not been able to fathom, default template parameters on structs don't work like they do on functions. IMO the error message is not too bad once you understand what's going on (which probably means it's really not a good error message). struct A(int var = 3) is short for: template A(int var = 3) { struct A { /*...*/ } } As I'm sure you know. If it's written out like this, then I think it makes it obvious what the problem is. When you write `A a`, you're trying to use this template like a type, but templates are not types; they are used to _construct_ types. Therefore, to construct a valid type from the template A, you have to instantiate it by declaring an `A!() a`. The compiler could allow implicit instantiation if the template has 0 arguments or only default arguments, but as Jonathan showed, this causes a lot of ambiguous cases that are better avoided. One way we could probably improve the error message is to change it to "template struct test.A(int var = 3) is used as a type. It must be instantiated", or something along those lines, to make it clear why you can't use A as a type.
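In code, the fix being described (valid today):

struct A(int var = 3) { int a; }

void main()
{
    A!() a;  // instantiate the template with its default argument
    A!(7) b; // or with an explicit one
    a.a = 1;
    b.a = 2;
    // `A c;` without the `!()` is exactly the error from the original post:
    // "struct A(int var = 3) is used as a type"
}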
Re: static foreach and new identifier names
On Friday, 5 January 2018 at 23:52:43 UTC, Meta wrote: On Friday, 5 January 2018 at 23:50:52 UTC, Meta wrote: On Friday, 5 January 2018 at 17:41:23 UTC, Adam D. Ruppe wrote: Make a special identifier known the compiler, let's call it `__unique_name` which is unique for any static foreach iteration. You can emulate it by abusing the compiler-generated random names for lambdas: enum uniqueName(string cookie = {}.stringof) = cookie; But that won't work for what you want. Never mind me. Oho, template mixins to the rescue. With this you can auto generate all the new symbols you want and the syntax isn't too ugly. mixin template uniqueName(DeclType, string cookie = {}.stringof) { mixin(`DeclType ` ~ cookie ~ `;`); pragma(msg, cookie); } void main() { static foreach (i; 0..50) { mixin uniqueName!int; mixin uniqueName!int; } } This prints: __lambda5 __lambda6 __lambda7 __lambda8 __lambda9 __lambda10 __lambda11 __lambda12 __lambda13 __lambda14 __lambda15 __lambda16 __lambda17 __lambda18 __lambda19 __lambda20 __lambda21 __lambda22 __lambda23 __lambda24 __lambda25 __lambda26 __lambda27 __lambda28 __lambda29 __lambda30 __lambda31 __lambda32 __lambda33 __lambda34 __lambda35 __lambda36 __lambda37 __lambda38 __lambda39 __lambda40 __lambda41 __lambda42 __lambda43 __lambda44 __lambda45 __lambda46 __lambda47 __lambda48 __lambda49 __lambda50 __lambda51 __lambda52 __lambda53 __lambda54 __lambda55 __lambda56 __lambda57 __lambda58 __lambda59 __lambda60 __lambda61 __lambda62 __lambda63 __lambda64 __lambda65 __lambda66 __lambda67 __lambda68 __lambda69 __lambda70 __lambda71 __lambda72 __lambda73 __lambda74 __lambda75 __lambda76 __lambda77 __lambda78 __lambda79 __lambda80 __lambda81 __lambda82 __lambda83 __lambda84 __lambda85 __lambda86 __lambda87 __lambda88 __lambda89 __lambda90 __lambda91 __lambda92 __lambda93 __lambda94 __lambda95 __lambda96 __lambda97 __lambda98 __lambda99 __lambda100 __lambda101 __lambda102 __lambda103 __lambda104
Re: static foreach and new identifier names
On Friday, 5 January 2018 at 17:41:23 UTC, Adam D. Ruppe wrote: Make a special identifier known the compiler, let's call it `__unique_name` which is unique for any static foreach iteration. You can emulate it by abusing the compiler-generated random names for lambdas: enum uniqueName(string cookie = {}.stringof) = cookie;
Re: static foreach and new identifier names
On Friday, 5 January 2018 at 23:50:52 UTC, Meta wrote: On Friday, 5 January 2018 at 17:41:23 UTC, Adam D. Ruppe wrote: Make a special identifier known the compiler, let's call it `__unique_name` which is unique for any static foreach iteration. You can emulate it by abusing the compiler-generated random names for lambdas: enum uniqueName(string cookie = {}.stringof) = cookie; But that won't work for what you want. Never mind me.
Re: Odd behavior found in GC when terminating application
On Friday, 5 January 2018 at 19:18:59 UTC, 12345swordy wrote: On Friday, 5 January 2018 at 17:14:58 UTC, Meta wrote: On Friday, 5 January 2018 at 15:26:03 UTC, 12345swordy wrote: On Friday, 5 January 2018 at 14:35:44 UTC, Jonathan M Davis wrote: On Friday, January 05, 2018 14:29:41 Adam D. Ruppe via Digitalmars-d wrote: [...] Either that or use structs on the stack instead of classes on the heap so that you're actually using destructors instead of finalizers. - Jonathan M Davis There been discussions regarding spiting up the destruction and finalizers. I can't find the thread (it's a couple years old at this point, I think), but Andrei once proposed removing class destructors and met very heavy resistance so he dropped it. Removing it is not the same thing as spiting them up. I meant it as an addendum
Re: Odd behavior found in GC when terminating application
On Friday, 5 January 2018 at 15:26:03 UTC, 12345swordy wrote: On Friday, 5 January 2018 at 14:35:44 UTC, Jonathan M Davis wrote: On Friday, January 05, 2018 14:29:41 Adam D. Ruppe via Digitalmars-d wrote: [...] Either that or use structs on the stack instead of classes on the heap so that you're actually using destructors instead of finalizers. - Jonathan M Davis There been discussions regarding spiting up the destruction and finalizers. I can't find the thread (it's a couple years old at this point, I think), but Andrei once proposed removing class destructors and met very heavy resistance so he dropped it.
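A minimal illustration of the distinction being drawn; note that whether (and when) the class finalizer runs depends on the GC actually collecting the object:

import std.stdio;

struct HandleS
{
    ~this() { writeln("struct destructor: runs deterministically at scope exit"); }
}

class HandleC
{
    ~this() { writeln("class finalizer: runs only if/when the GC collects the object"); }
}

void main()
{
    {
        HandleS s;              // destroyed at the closing brace below
        auto c = new HandleC(); // finalization timing is up to the GC
    }
    writeln("after scope");
}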
Re: AliasSeq seems to compile slightly faster with static foreach
On Friday, 5 January 2018 at 13:10:25 UTC, Jonathan M Davis wrote: There was a recent PR for Phobos where Seb added static to a bunch of foreach's that used AliasSeq. It hadn't actually occurred to me that that was legal (I've basically just been using static foreach where foreach with AliasSeq doesn't work), but it is legal (which I suppose isn't surprising when you think about it; I just hadn't). However, that got me to wondering if such a change was purely aesthetic or whether it might actually have an impact on build times - particularly since running dub test for one of my recent projects keeps taking longer and longer. So, I added static to a bunch of foreach's over AliasSeqs in that project to see if it would have any effect. The result was that dub test went from about 16.5 seconds on my system to about 15.8 seconds - and that's just by adding static to the foreach's over AliasSeqs, not fundamentally changing what any of the code did. That's not a huge speed up, but it's definitely something and far more than I was expecting. Of course, you have to be careful with such a change, because static foreach doesn't introduce a new scope, and double braces are potentially required where they weren't before, but given that I'd very much like to streamline that test build, adding static to those foreach's was surprisingly worthwhile. Taking it a step further, I tried switching some of the static foreach's over to using array literals, since they held values rather than types, and that seemed to have minimal impact on the time to run dub test. However, by switching to using std.range.only, it suddenly was taking more like 11.8 seconds. So, with a few small changes, I cut the time to run dub test down by almost a third. - Jonathan M Davis It does not make any sense to me as to why using only instead of AliasSeq resulted in a speedup. I would've expected no change or worse performance. Any theories?
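For reference, the kind of change being benchmarked looks roughly like this; it is a sketch of the pattern, not the actual Phobos diff:

import std.meta : AliasSeq;

alias Types = AliasSeq!(int, long, double);

void main()
{
    // Both loops run at compile time; the first is the implicitly
    // unrolled foreach, the second the explicit static foreach.
    foreach (T; Types)
        pragma(msg, "foreach saw " ~ T.stringof);

    static foreach (T; Types)
        pragma(msg, "static foreach saw " ~ T.stringof);
}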
Re: What don't you switch to GitHub issues
On Sunday, 31 December 2017 at 11:18:26 UTC, Seb wrote: On Saturday, 30 December 2017 at 02:50:48 UTC, Adam D. Ruppe wrote: Bugzilla was the most well-known solution at the time. Keep in mind the D bugzilla has been around since 2006. As far as I understand it, migration at this point is deemed a big pain. No it wouldn't be a big pain. There are many tools for automatically migrating issues from Bugzilla. The only thing depending on Bugzilla is the changelog generator, but it's API calls to Bugzilla can be replaced with GitHub API calls within an hour. So the entire migration could be easily done in a lot less than a day. The only reason we still use Bugzilla is that the core people are used to it. Here are a couple of the common arguments: 1) Bugzilla is our, we don't want to depend on GitHub The D ecosystem already heavily depends on GitHub. Exporting the issues from GitHub would be easy. Besides there is only one person with access to the Bugzilla server. 2) GitHub only has per registry issues Bugzilla uses components too, they don't support global issues either. Besides if that's required one could easily create a meta repository for such global tasks. 3) Bugzilla's issue tracker is more sophisticated Sure, but does this help when you loose out on many contributors? GitHub even has build tools and sites that let anyone discover "easy" issues if they are labeled accordingly. It's free marketing. FYI I asked the same question 1 1/2 years ago: https://forum.dlang.org/post/ezldcjzpmsnxvvncn...@forum.dlang.org Since then, for example, GitHub got voting for issues, but Bugzilla lost it. I wholeheartedly agree. The customer is always right, especially when you're trying to get them to donate their time to an open source project. It's more essential than ever that we lower barriers to participation; if Github issues is the hip new thing all the kids like, then we need to switch to that. We shouldn't be constantly switching to the shiniest new toy, but nor should we stubbornly stick to a piece of software that was built (and it looks it) in '90s. Or at least we should if we're trying to attract the kind of people for whom not using Github is a deal breaker. Older C++/Java programmers likely don't care, but younger Python/Ruby/JS users will.
Re: What do you want to see for a mature DLang?
On Sunday, 31 December 2017 at 11:27:41 UTC, Seb wrote: Yes, Dlang-bot was able to detect stalled issues for a while, but we didn't turn this on for all repositories. I have just enabled it: https://github.com/dlang-bots/dlang-bot/pull/153 For the moment, it is just labelling issues with e.g. "needs work", "needs rebase", "stalled", "stalled-stable". In a next stage it will start to actively ping people or close PRs. Also automatically rebasing PRs if there are no conflicts is on the radar (the GitHub UI is quite conservative in this regard). Awesome. Thanks for all your hard work on the automation side of things, by the way. It's not glamorous but it's a huge force multiplier.
Re: What do you want to see for a mature DLang?
On Saturday, 30 December 2017 at 14:42:45 UTC, Muld wrote: On Saturday, 30 December 2017 at 06:55:13 UTC, Walter Bright wrote: It's not like we have a shortage of bugzilla issues and are wondering what to do next. Yah there are a ton of Bugzilla issues, that's the problem. More than half of them aren't "actionable" as you put it. Here's the problem, look at something like Rust: Pull requests? 95 open, it's about the same as Dlang, But if you go to the last page... https://github.com/rust-lang/rust/pulls?page=4&q=is%3Apr+is%3Aopen Look at that the oldest one is from October 15th, 20_17_. Now we go to DMD... https://github.com/dlang/dmd/pulls?page=6&q=is%3Apr+is%3Aopen Oldest one is from January 17, 20_13_. This is a problem that many of us are working on fixing. The main reason many of these old zombie PRs stick around is that historically, people are hesitant to close things (for a variety of social reasons, I feel). While there is still the slightest chance that something might someday be merged, it is kept open. Rust is a lot more aggressive about closing bad or outdated PRs and either guiding PRs that need work to get to a mergeable state, or closing them and communicating that this is not the correct way to go. I watched a talk by a Rust contributor specifically on this point awhile ago - they have a bot that does a lot of the PR closure work to get around the fact that people are hesitant to be the "bad guy" and tell someone that their work is not good enough. D needs to get much better at this, and I think things are happening - slowly. The bad optics and demoralizing effect of letting things sit forever without definitive action outweighs the potential loss from being more aggressive about closing or merging.
Re: What is this "dd" doc format?
On Tuesday, 19 December 2017 at 20:20:16 UTC, John Gabriele wrote: I just went looking for the source for the dlang.org overview page, and found [this](https://github.com/dlang/dlang.org/blob/master/overview.dd). I've seen and used a lot of markup formats, but have never run across this. What format is it? Also curious: for what reasons was it chosen over a more customary markup? Thanks. https://dlang.org/spec/ddoc.html When D was first created there was no clear documentation solution (as far as I know; I was not a programmer back then), so Walter invented DDOC. Actually, it's a general text macro system rather than a markup language or a documentation generation system, which may be why it seems so strange if you're used to stuff like javadoc/doxygen/etc.
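A small taste of Ddoc in action; Params and Returns are just sections the default macro set understands, and $(B ...) is an ordinary macro expansion:

/**
 * Computes twice the input.
 *
 * Params:
 *     x = the value to double
 *
 * Returns: $(B two) times the input
 */
int twice(int x) { return 2 * x; }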
Re: Attributes on Enum Members: Call for use cases.
On Wednesday, 29 November 2017 at 16:45:04 UTC, Timon Gehr wrote: On 29.11.2017 17:21, Andrei Alexandrescu wrote: On 11/29/2017 07:53 AM, Seb wrote: UDAs for function arguments would be really awesome to have. They should be part of the same DIP. -- Andrei More generally, any declaration should ideally support UDAs. One issue with UDAs on function arguments is that function types will then have embedded UDAs, so the DIP should specify how that works. You have much more experience in how compilers work on the implementation level than me. What semantics are possible? It makes sense to me that UDAs on function arguments should only be for documentation/reflection purposes. Therefore: int fun(@Nonnull Object o); Is considered to be equivalent to: int fun(Object o); And similarly: import std.traits; Parameters!fun -> (Object), not (@Nonnull Object) However: int fun(@Nonnull Object o) { static if (is(typeof(fun) Params == __parameters)) { static foreach (p; Params) { pragma(msg, __traits(getUDAs, p)); //Should print Nonnull } } } Unfortunately I don't think we can have both as that would mean that UDAs would have to be part of the function signature. Is there a way around this?
Re: Attributes on Enum Members: Call for use cases.
On Tuesday, 28 November 2017 at 02:20:15 UTC, Michael V. Franklin wrote: On Sunday, 19 November 2017 at 13:35:13 UTC, Michael V. Franklin wrote: What's the official word? Does it require a DIP? For those who might want to know, Walter has informed me that this change will require a DIP. I already have two DIPs in the queue right now, so I wouldn't mind if someone else wrote it. But, absent any volunteers, I would welcome all of you to reply to this thread with some use cases where you might find UDAs or other attributes useful on enum members. Deprecation and serialization have already been mentioned, but it'd be impossible for me to imagine all the different ways users might find this feature useful. Thanks, Mike I'd be interested in working on a DIP like this Michael, but I also want to expand the scope to allowing UDAs on function arguments as well. We should have some solid use cases in mind; let's take this to private email.
Re: [OT] Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?
On Friday, 24 November 2017 at 20:29:23 UTC, codephantom wrote: On Friday, 24 November 2017 at 12:10:28 UTC, Nick Treleaven wrote: On Thursday, 23 November 2017 at 06:35:17 UTC, codephantom wrote: I love not being able to edit posts. It's so convenient. It's not as much of a problem as not being able to hide all posts by a user who repeats arguments, derails the conversation onto irrelevant side discussions and judges individuals instead of the idea they are conveying. So...you've just described your own post...you moron. Fuck you. This is going too far. This mailing list is for civil discourse.
Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?
On Tuesday, 21 November 2017 at 09:12:25 UTC, Ola Fosheim Grostad wrote: On Tuesday, 21 November 2017 at 06:03:33 UTC, Meta wrote: I'm not clear on whether he means that Java's type system is unsound, or that the type checking algorithm is unsound. From what I can tell, he's asserting the former but describing the latter. He claims that type systems with existential rules, hierarchical relations between types and null can potentially be unsound. His complaint is that if Java had been correctly implemented to the letter of the spec then this issue could have led to heap corruption if exploited by a malicious programmer. Runtime checks are part of the type system though, so it isn't unsound as implemented as generated JVM does runtime type checks upon assignment. AFAIK the complaint assumes that information from generic constraints isn't kept on a separate level. It is a worst case analysis of the spec... I don't quite understand the logic here, because it seems to be backwards reasoning. Constrain is a valid type because null inhabits it? That doesn't make sense to me. He also cites the "implicit constraint" that X extends U where X is ? super T, but X does not meet that constraint (Constrain<X extends U>) so how can the type checker deduce that X extends U?
Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?
On Tuesday, 21 November 2017 at 01:03:36 UTC, Mark wrote: On Monday, 20 November 2017 at 22:56:44 UTC, Walter Bright wrote: On 11/20/2017 3:27 AM, Timon Gehr wrote: On 20.11.2017 11:07, Atila Neves wrote: The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system. In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.) I'm curious. Can you expand on this, please? (In D, casting null to any other pointer type is marked as @unsafe.) This blog post seems to summarize the paper he linked to: https://dev.to/rosstate/java-is-unsound-the-industry-perspective And, like clockwork, the very first post is someone complaining that he insulted Javascript with an offhand example with a thread going 10 posts deep. I'm not clear on whether he means that Java's type system is unsound, or that the type checking algorithm is unsound. From what I can tell, he's asserting the former but describing the latter.
Re: "body" keyword is unnecessary
On Sunday, 19 November 2017 at 21:14:58 UTC, Jonathan M Davis wrote: It would have been better to explain in the documentation that body was being phased out rather than just removing it right when the changes were made to dmd. It's already caused problems due to folks trying to use do and it not working with the compiler that they're using (e.g. ldc). https://stackoverflow.com/questions/46860573/do-ldc-and-gdc-support-d-language-contracts - Jonathan M Davis Yeah, you're right. I never use GDC/LDC so I didn't consider them.
Re: "body" keyword is unnecessary
On Sunday, 19 November 2017 at 12:54:37 UTC, Basile B. wrote: Good question, it's not even in the changelog: https://www.google.fr/search?domains=dlang.org&dcr=0&biw=1280&bih=635&tbs=qdr%3Ay&ei=H34RWpKDPIzTgAatnqK4DA&q=body+do+site%3Adlang.org%2Fchangelog&oq=body+do+site%3Adlang.org%2Fchangelog&gs_l=psy-ab.3...4014.4428.0.4779.3.3.0.0.0.0.67.190.3.3.00...1.1.64.psy-ab..0.0.00.AOIgJDEhh_g So maybe it's worth mentioning something like "(formerly body, which is still allowed during )", because there's been a communication problem with that deprecation. Yes, I just checked and it's nowhere to be found in the changelog. Walter created the PR to implement the `do` syntax but he did not make a corresponding doc change. I'm not sure what to do considering 2.077.0 has already been released.
Re: "body" keyword is unnecessary
On Sunday, 19 November 2017 at 12:54:37 UTC, Basile B. wrote: Yeah, "no worries" but for example a few weeks ago a bug report has drawn my attention: https://issues.dlang.org/show_bug.cgi?id=17925 After testing some code with i've indeed observed that the transition period for `do` had started... "since when ?" i've wondered. Good question, it's even not in the changelog: https://www.google.fr/search?domains=dlang.org&dcr=0&biw=1280&bih=635&tbs=qdr%3Ay&ei=H34RWpKDPIzTgAatnqK4DA&q=body+do+site%3Adlang.org%2Fchangelog&oq=body+do+site%3Adlang.org%2Fchangelog&gs_l=psy-ab.3...4014.4428.0.4779.3.3.0.0.0.0.67.190.3.3.00...1.1.64.psy-ab..0.0.00.AOIgJDEhh_g So maybe it's wort mentioning something like "(formerly body, which is still allowed during )", because there's been a communication problem with that deprecation. Yes, I'm pretty sure I created the PR to remove all references to `body` myself. It's part of the process; the first step is removing it from the documentation, because outright deprecation is too sudden. It's still perfectly usable, but we don't want to advertise it anymore.
Re: The delang is using merge instead of rebase/squash
On Friday, 24 March 2017 at 16:34:46 UTC, Martin Nowak wrote: On Tuesday, 21 March 2017 at 20:16:00 UTC, Atila Neves wrote: git rebase master my_branch git checkout master git merge --no-ff my_branch Yes, that's about what we aim for, rebase w/ --autosquash though, so that people can `git commit --fixup` new fixup commits to open PRs w/o leaving noise behind. https://github.com/dlang-bots/dlang-bot/issues/64 Requires a local checkout of the repo which the bot doesn't have atm. Did we come to any consensus on this? I ran into a dilemma with https://github.com/dlang/phobos/pull/5577 where I added a couple fixup commits, and now I don't want to merge until somebody rebases it because the history will be polluted with those extra commits. Also, looking at the PRs linked in this thread, I see that they're still open so AFAICT there is no clear solution.
Re: DMD PR management hits a new low
On Saturday, 18 November 2017 at 13:08:55 UTC, Dmitry Olshansky wrote: On Saturday, 18 November 2017 at 07:52:43 UTC, Michael V. Franklin wrote: I'll just refer you to this comment: https://github.com/dlang/dmd/pull/6947#issuecomment-345423103 Manually merging this pull as it sat around long enough waiting to be marked approved that it accumulated github's max 1000 status updates per commit id and won't ever see more until a new commit becomes current for it. Let that sink in... over 1000 builds done for a single pull request before it got marked for merging. It’s time for us to understand that letting PRs rot in an open and uncertain state is even worse than outright rejecting controversial work. It damages reputation, deters future contributions and clutters the queue. I’d suggest putting on a grim reaper’s robe and cutting down things that are not attended to. If we were too eager to close, no worries - just create a new PR. Agreed. I tried my hand at this a while ago with the Phobos queue and went on a mini closing/merging/pinging spree, but was very conservative at the risk of stepping on toes and thus was only able to get the number of open PRs down to 90 (and now it's back up over 110 again). I've noticed, however, that while Phobos/Druntime get most of the attention, dmd is really languishing. There are currently 182(!) open pull requests for dmd, with over half (99) being open for 1 year or more. The situation is getting out of control. https://github.com/dlang/dmd/pulls?utf8=✓&q=is%3Apr%20is%3Aopen%20created%3A<2016-11-18 Can we get some resources allocated to this, please? What can I do? Review stuff mostly. To close a ton of PRs we’d need an executive decision. Thanks, Mike
Re: "body" keyword is unnecessary
On Saturday, 18 November 2017 at 16:21:30 UTC, Eljay wrote: On Monday, 28 March 2011 at 18:59:03 UTC, Walter Bright wrote: On 3/27/2011 10:35 PM, Jonathan M Davis wrote: I'll be _very_ excited to have both the destructor issues and the const issues sorted out. They are some of the more annoying quality of implementation issues at the moment. Yes, I agree those are the top priority at the moment, now that we have the 64 bit compiler online and the worst of the optlink issues resolved. NECRO ALERT... But I just saw that https://github.com/dlang/DIPs/blob/master/DIPs/DIP1003.md was addressed with https://github.com/dlang/dmd/pull/6855 . I, for one, will miss 'body' the keyword. Now I'll have to update all my toy code. (Just kidding, I don't mind updating my toy code. At least it isn't a codebase the size of Photoshop!) Don't worry, you've got a few years yet. Currently `body` is not even deprecated; it's become a conditional keyword, or you can use `do` in its place.
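For reference, the syntax in question, with `do` introducing the function body where `body` used to:

int divide(int a, int b)
in { assert(b != 0); }
out (result) { assert(result * b + a % b == a); }
do
{
    return a / b;
}

void main()
{
    assert(divide(7, 2) == 3);
}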
Re: Should aliasing a lambda expression be allowed?
On Thursday, 16 November 2017 at 16:10:50 UTC, Meta wrote: int function(int) f1 = (int n) => n; int function(int) f2 = (char c) => c; Should be int function(char)
Re: Should aliasing a lambda expression be allowed?
On Thursday, 16 November 2017 at 13:05:51 UTC, Petar Kirov [ZombineDev] wrote: On Wednesday, 15 November 2017 at 19:29:29 UTC, Steven Schveighoffer wrote: On 11/15/17 11:59 AM, Andrea Fontana wrote: On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven Schveighoffer wrote: alias foo = lambda1; alias foo = lambda2; What? Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Steve I guess you guys haven't been keeping up with language changes :P https://dlang.org/changelog/2.070.0.html#alias-funclit And yes, you can use 'alias' to capture overload sets. See also: https://github.com/dlang/dmd/pull/1660/files https://github.com/dlang/dmd/pull/2125/files#diff-51d0a1ca6214e6a916212fcbf93d7e40 https://github.com/dlang/dmd/pull/2417/files https://github.com/dlang/dmd/pull/4826/files https://github.com/dlang/dmd/pull/5162/files https://github.com/dlang/dmd/pull/5202 https://github.com/dlang/phobos/pull/5818/files Yes, as far as I understand this is just the normal way that you add a symbol to an existing overload set, except now it also interacts with the functionality of using an alias to create a named function literal. Kind of interesting because I don't think it was possible to do this before, e.g.: int function(int) f1 = (int n) => n; int function(int) f2 = (char c) => c; Would obviously be rejected by the compiler. However, using the alias syntax we can create an overload set from function literals in addition to regular functions.
Re: Project Elvis
On Tuesday, 7 November 2017 at 13:43:20 UTC, user1234 wrote: On Monday, 6 November 2017 at 20:14:17 UTC, Meta wrote: [...] import std.stdio; writeln(safeDeref(tree).right.right.val.orElse(-1)); writeln(safeDeref(tree).left.right.left.right.orElse(null)); writeln(safeDeref(tree).left.right.left.right.val.orElse(-1)); vs. writeln(tree?. right?.right?.val ?: -1); writeln(tree?.left?.right?.left?.right); writeln(tree?.left?.right?.left?.right?.val ?: -1); The functionality is probably a good idea, but a library solution is doable today without any acrobatics. Show me a library solution that works fine with IDE completion (so for the safe navigation operator, not the Elvis one). Yes, this is unfortunately the one sticking point of a library solution, although if the front end becomes fully usable as a library it may be possible to an extent.
Re: Project Elvis
On Monday, 6 November 2017 at 19:55:13 UTC, Jacob Carlborg wrote: On 2017-11-06 20:40, Dmitry Olshansky wrote: I’d argue this NOT what we want. Nullability is best captured in the typesystem even if in the form of Nullable!T. Yeah, it would be better if the elvis operator good integrate with a nullable/option type as well in addition to null. What's the point when we can already do it easily in a library, and arguably with better ergonomics? (http://forum.dlang.org/post/fshlmahxfaeqtwjbj...@forum.dlang.org) auto tree = new Node(1, new Node(2), new Node(3, null, new Node(4) ) ); import std.stdio; writeln(safeDeref(tree).right.right.val.orElse(-1)); writeln(safeDeref(tree).left.right.left.right.orElse(null)); writeln(safeDeref(tree).left.right.left.right.val.orElse(-1)); vs. writeln(tree?. right?.right?.val ?: -1); writeln(tree?.left?.right?.left?.right); writeln(tree?.left?.right?.left?.right?.val ?: -1); The functionality is probably a good idea, but a library solution is doable today without any acrobatics.
Re: Improve "Improve Contract Syntax" DIP 1009
On Wednesday, 1 November 2017 at 22:04:10 UTC, Andrei Alexandrescu wrote: We're having difficulty reviewing https://github.com/dlang/DIPs/blob/master/DIPs/DIP1009.md. The value is there, but the informal and sometimes flowery prose affects the document negatively. There are some unsupported claims and detailed description is sketchy. We need a careful pass that replaces the unclear or imprecise statements with clear, straightforward scientific claims. Can anyone help with this? For example, the first paragraph: "D has already made a significant commitment to the theory of Contract Programming, by means of its existing in, out, and invariant constructs. But limitations remain to their usability, both in their syntax and in their implementation. This DIP addresses only the syntax aspect of those limitations, proposing a syntax which makes in, out, and invariant contracts much easier to read and write." could be: "The D language supports Contract Programming by means of its in, out, and invariant constructs. Their current syntactic form is unnecessarily verbose. This DIP proposes improvements to the contract syntax that makes them easier to read and write." The change: * eliminates the entire "implementation sucks" allegation which seems taken straight from a forum flamewar; * replaces adjective-laden language with simple and precise statements; * provides a brief factual overview of what follows. Who wants to help? Andrei This actually makes the DIP slightly longer but hopefully makes it more clear. https://github.com/dlang/DIPs/pull/95 I'm heading off to bed so I won't be able to respond right away to suggested changes.
Re: My two cents
On Friday, 20 October 2017 at 00:26:19 UTC, bauss wrote: On Wednesday, 18 October 2017 at 08:56:21 UTC, Satoshi wrote: conditional dereferencing and stuff about that (same as in C#) foo?.bar; foo?[bar]; return foo ?? null; Tbh. these are some I really wish were in D, because it becomes tedious having to write something like this all the time: return foo ? foo : null; where return foo ?? null; would be so much easier. It especially becomes painful when you have something with multiple member accesses. Like: return foo ? foo.bar ? foo.bar.baz ? foo.bar.baz.something : null; Which could just be: return foo?.bar?.baz?.something; async/await (vibe.d is nice but useless in comparison to C# or js async/await idiom) I want to create function returning Promise/Task and await where I want to. e.g. auto result = device.start(foo, bar); // This is RPC to remote server returning Task!Bar // do some important stuff return await result; // wait for RPC finish, then return it's result I don't think this is much necessary, because the fiber implementations already are able to let you write code close to this. The only difference is you have to import the modules, but it's such a small thing I don't think you really need this. implement this thing from C# (just because it's cool) new Foo() { property1 = 42, property2 = "bar" }; Thanks for your time. - Satoshi I really wish this was implemented for classes too! Currently it exist for structs and it completely baffles me why it has never been implemented for structs. http://forum.dlang.org/post/mailman.2562.1403196857.2907.digitalmar...@puremagic.com From that thread: Here's a slightly improved version that collapses nested wrappers into a single wrapper, so that Maybe!(Maybe!(Maybe!...Maybe!T)...) == Maybe!T: /** * A safe-dereferencing wrapper resembling a Maybe monad. * * If the wrapped object is null, any further member dereferences will simply * return a wrapper around the .init value of the member's type. Since non-null * member dereferences will also return a wrapped value, any null value in the * middle of a chain of nested dereferences will simply cause the final result * to default to the .init value of the final member's type. */ template SafeDeref(T) { static if (is(T U == SafeDeref!V, V)) { // Merge SafeDeref!(SafeDeref!X) into just SafeDeref!X. alias SafeDeref = U; } else { struct SafeDeref { T t; // Make the wrapper as transparent as possible. alias t this; // This is the magic that makes it all work. auto opDispatch(string field)() if (is(typeof(__traits(getMember, t, field { alias Memb = typeof(__traits(getMember, t, field)); // If T is comparable with null, then we do a null check. // Otherwise, we just dereference the member since it's // guaranteed to be safe of null dereferences. // // N.B.: we always return a wrapped type in case the return // type contains further nullable fields. static if (is(typeof(t is null))) { return safeDeref((t is null) ? Memb.init : __traits(getMember, t, field)); } else { return safeDeref(__traits(getMember, t, field)); } } } } } /** * Wraps an object in a safe dereferencing wrapper resembling a Maybe monad. * * If the object is null, then any further member dereferences will just return * a wrapper around the .init value of the wrapped type, instead of * dereferencing null. This applies recursively to any element in a chain of * dereferences. * * Params: t = data to wrap. * Returns: A wrapper around the given type, with "safe" member dereference * semantics. 
*/ auto safeDeref(T)(T t) { return SafeDeref!T(t); } unittest { class Node { int val; Node left, right; this(int _val, Node _left=null, Node _right=null) { val = _val; left = _left; right = _right; } } auto tree = new Node(1, new Node(2), new Node(3,
Re: Why Physicists Still Use Fortran
On Tuesday, 17 October 2017 at 13:09:37 UTC, Steven Schveighoffer wrote: Ouch! I had an experience like that once. I worked at a company that bought a one-man show's company who had an impressive load-balancing software we wanted to incorporate in our system. About 1-2 years into him working at our company, one of our developers tested it using webbench (all testing had been done by this guy previously), and was getting terrible numbers. But his tests always showed really good numbers. Turns out he was "timing" his benchmarks by starting a separate thread, then sleeping for 1 second, and then measuring how many requests he handled in that "1 second". But of course, the system was super-loaded, so the sleep was going way longer than 1 second, and his numbers looked great! After we fixed it, the numbers looked horrific and matched webbench. When this was found out, we kind of moved away from that software, as we were moving our focus to hardware. I can't imagine how that must have felt, though. -Steve This is just plain negligence on upper management's part. I can't believe they got that far without doing due diligence to verify his results.
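The underlying lesson translates directly to D: measure elapsed time with a monotonic clock instead of trusting the requested sleep. A sketch (the original system was C++; this only illustrates the principle):

import core.thread : Thread;
import core.time : MonoTime, seconds;
import std.stdio;

void main()
{
    auto start = MonoTime.currTime;
    Thread.sleep(1.seconds); // may oversleep badly on a loaded machine
    auto elapsed = MonoTime.currTime - start;

    // Divide the work counted by `elapsed`, never by the nominal 1 second.
    writeln("slept for ", elapsed.total!"msecs", " ms");
}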
Re: Implicit Constructors
On Friday, 13 October 2017 at 14:28:43 UTC, Adam D. Ruppe wrote: On Friday, 13 October 2017 at 14:22:05 UTC, Meta wrote: It'd be nice if it did, because I believe it would enable the following: I don't think so, since the implicit construction would only work one level deep. So you can implicit construct the array, but not the individual variants in the array. You can get very close but I don't think this will work with inheritance. If that could be fixed then it should be a workable solution (I think): class VArray { Variant[] va; this(T...)(T ts) { foreach(t; ts) { va ~= Variant(t); } } } void test2(VArray ta...) { foreach (v; ta.va) { writeln(v.type); } } void main() { test2(1, "asdf", false); }
Re: Implicit Constructors
On Friday, 13 October 2017 at 13:19:24 UTC, Adam D. Ruppe wrote: On Friday, 13 October 2017 at 06:33:14 UTC, Jacob Carlborg wrote: Not sure what the purpose of the latter is. I think it is just so you can do (T)(T...) in a template and have it work across more types; unified construction syntax. Though why it doesn't work with structs is beyond me. It'd be nice if it did, because I believe it would enable the following: import std.stdio; import std.variant; void test(Variant[] va...) { foreach (v; va) { writeln(v.type); } } void main() { test(1, "asdf", false); //Currently doesn't compile }