Re: Poll for D Game Dev
(This is my second attempt, the forums were down the first time… :-( ) On Wednesday, 4 January 2023 at 02:54:51 UTC, Hipreme wrote: Hello D Game Developers. As you guys know, I have been developing a cross-platform game engine for some years now. I'm close to releasing version 1.0. That's nice. I believe the best way to attract attention to a game framework is to create a demo game that is visually interesting and showcase it. Screenshots and "glitter" sell…
1: Would you be interested in participating in a D game jam? No time at this point.
2: Why did you start using D for developing games? I viewed C++ as expensive to develop in (this was pre-C++11) and was looking for a cheaper alternative (i.e. less work).
3: What frameworks, libraries or game engines are you using for D? Are you developing your own? I created my own (no longer pursuing it).
3.1: What do you like more about the framework you're using? 3.2: What do you dislike about the framework you're using? I don't really want a full framework; I would be attracted to a set of low-overhead libraries that sit right on top of Metal and Vulkan, with some abstractions that allow me to write my own shaders. And high-quality tutorials. Focused, up-to-date demo tutorials are very important for getting people interested in frameworks. Outdated tutorials often linger around after the framework has moved on, which is a source of frustration that is better avoided.
4: What is the D ecosystem missing for you to develop your own game? At the time, the GC turned out to be unsuitable, which reduced the usefulness of this approach for me.
5: How much do you care about the game engine being betterC compatible? And why? I don't care how it is achieved in the framework, but in order to be interested I would need to be able to control memory allocations with ease. So not "betterC" per se.
6: Which kind of game do you plan to develop? 2D or 3D? Which platform are you targeting? 3D or 2.5D, with shaders.
I would probably require iOS and all desktops. WebGL2/WebAssembly would be a strong selling point. I don't know if it is possible to develop games for ChromeOS, but one way to generate some initial interest is to support a platform that others don't and market it in forums geared towards that platform. In essence, you can choose between designing a framework that makes very good use of a small number of platforms (with stiff competition) or a more abstract framework that is highly portable (but less fancy).
7: Are you looking to sell your game or just toying with the D language (not going to make any serious project)? Why? If I were to make a full game that isn't a web-only game, then there would have to be a solid source of revenue. I once created a prototype in [Metaplace](https://en.wikipedia.org/wiki/Metaplace), a hosted multi-user platform where you could write your own games (in 2D, with a basic physics engine and scripting). It allowed you to create sub-games/sub-worlds. Something like that could be interesting, but creating a hosted multi-user solution is also very challenging. I guess it is possible to create a single-user game that is structured in the same way: basically let individual contributors create subgames within a larger game. That could be interesting, but I am not sure how you can ensure quality. You might need to use the same graphical/sound artists for all subworlds, but then you need solid funding…
Re: Godbolt now shows optimization pipeline steps for LDC
On Thursday, 21 July 2022 at 17:58:17 UTC, Johan wrote: Godbolt now shows optimization pipeline steps for LDC, giving great insight into the LLVM optimization process, including […] Thanks for pointing this out, this was fun!
Re: Blog post on extending attribute inference to more functions
On Tuesday, 19 July 2022 at 09:55:52 UTC, Guillaume Piolat wrote: I put tags in comments, to text search later. Usually:
```
// TODO: actually blocks a release
```
Yes, this form is also recognized by some editors, which might even compile a todo list in the IDE interface for you (with no setup).
Re: The D Programming Language Vision Document
On Wednesday, 6 July 2022 at 21:30:44 UTC, Dukc wrote: And I think there is still pretty much value in handling UTF-16 strings because that's what many other languages use. With the current vision, Phobos V2 won't handle UTF16 in place. We'll have to convert it to UTF8 before manipulation, which is probably not optimal. Oh, there is no doubt that handling UTF16 should be possible, but it can be done just as well, if not better, as a support library. But it is very much undesirable to have more than a single string format for library authors to deal with.
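For what it's worth, the convert-at-the-boundary approach can already be sketched with today's std.utf and std.conv (a minimal illustration; Phobos V2's actual API is of course not settled):

```d
import std.conv : to;
import std.utf : toUTF8;

void main()
{
    wstring w = "héllo"w;      // UTF-16 text from, e.g., a Windows API
    string s = w.toUTF8;       // convert once at the boundary
    // ... manipulate s with UTF-8-only string APIs ...
    assert(s.to!wstring == w); // convert back only when handing it off
}
```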
Re: The D Programming Language Vision Document
On Tuesday, 5 July 2022 at 11:49:20 UTC, ryuukk_ wrote: I am sad that no word on the Allocator API, moving forward i personally think APIs that use memory should be required to ask for an Allocator and do their allocation using it, and only it A default GCAllocator could be used if none provided, this allows users of all kind to enjoy the APIs without having to complain about the GC or their inability to integrate the APIs in their game engine for example It should not be resolved like this. Functions that do not return memory should just be @nogc. Functions that return allocated memory and are @nogc should use RAII and prevent GC pointers from pointing to it. So you need a new type system, or just overload on @system. You also want to get rid of destructors on GC objects and replace them with a finalizer that isn't sensitive to destruction order.
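For reference, the allocator-taking API style described above can be approximated today with std.experimental.allocator; a minimal sketch (the function name `makeCounter` is hypothetical, and the GC-backed `theAllocator` plays the role of the proposed default):

```d
import std.experimental.allocator : theAllocator, make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

// Hypothetical API: the caller decides where the memory comes from.
int* makeCounter(Allocator)(auto ref Allocator alloc, int start)
{
    return alloc.make!int(start);
}

void main()
{
    auto a = makeCounter(theAllocator, 1);        // GC-backed default
    auto b = makeCounter(Mallocator.instance, 2); // no GC involved
    assert(*a == 1 && *b == 2);
    Mallocator.instance.dispose(b);               // manual lifetime
}
```

Whether such a convention should be enforced by the type system, as the reply argues, is a separate question from whether the library plumbing exists.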
Re: The D Programming Language Vision Document
On Monday, 4 July 2022 at 16:12:35 UTC, rikki cattermole wrote: https://www.unicode.org/Public/14.0.0/ucd/NormalizationTest.txt Argh, linking to large files... My implementation passes this :3 It should be complete test cases. Well, you also have to test for the cases that should not trigger any change, and also for sequencing/parsing bugs. So, not complete, but a good start.
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 21:06:40 UTC, rikki cattermole wrote: We have a perfectly good Unicode handling library already. (Okay, little out of date and doesn't handle Turkic stuff, but fixable). The standard one is called ICU. Yes, that is a common one that is maintained, but maybe there are Boost-licensed implementations too? One can do an exhaustive test of, say, two-character normalization against ICU to see if they are compliant. Anyway, normalization should not happen behind your back in a system-level language. You might want to treat different encodings of the same string differently when comparing. Anyway, we are straying from my original point, that limiting ourselves to the string alias and not supporting wstring or dstring in Phobos is going to bite us. I guess some Windows programmers want 16 bit… but I don't think the conversion matters all that much in that context? There had better be a good reason for this that isn't just removing templates. The good reason would be that you can focus on fast SIMD-optimized algorithms that make sense for the byte encoding of UTF-8, and get something competitive.
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 20:28:18 UTC, rikki cattermole wrote: We only support UTF-16/UTF-32 for the target endian. Text input comes from many sources; stdin, files and, say, the windowing system are three common sources that do not make any such guarantees. Well, then the application author will use an external Unicode library anyway. If you support UTF-16 or UTF-32 there might not be a BOM, so you might need heuristics to figure out the LE/BE endianness issue. For things like gzip, png, crypto and Unicode there are most likely faster and better-tested open-source alternatives than a small community can come up with. Maybe just use whatever Chromium or Clang uses? What I never liked about C++ is the string mess: char, signed char, unsigned char, char8_t, char16_t, char32_t, wchar_t, string, wstring, u8string, u16string, u32string, pmr::string, pmr::wstring, pmr::u8string, pmr::u16string, pmr::u32string… And this doesn't even account for endianness!! This is what happens over time as new needs pop up. One of the best things about Python 3 and JavaScript is that there is one commonly used string type that is well supported. Having one common string representation is a good thing for API authors. (But make sure to have a maintained binding to a versatile C Unicode library.)
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 19:32:56 UTC, rikki cattermole wrote: It is required for string equivalent comparisons (which is what you should be doing in a LOT more cases!). Anything user provided should be normalized before comparison. Well, I think it is reasonable for a protocol to require that the input is NFC, and just check it and reject it, or call out to an external library to convert it into NFC. Anyway, UTF-8 is the only format that isn't affected by network byte order… So if you support more than UTF-8, then you have to support UTF-8, UTF-16LE, UTF-16BE, UTF-32LE and UTF-32BE… That is five formats for just a simple string… and only UTF-8 will be well tested by users. :-/
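Checking or enforcing NFC at the boundary, as suggested above, is already straightforward with Phobos' std.uni; a minimal sketch:

```d
import std.uni : normalize, NFC;

void main()
{
    // "é" as one precomposed code point vs. "e" + combining acute accent
    string composed = "\u00E9";
    string decomposed = "e\u0301";

    assert(composed != decomposed);                // raw comparison differs
    assert(normalize!NFC(decomposed) == composed); // equal after NFC
}
```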
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 18:33:29 UTC, rikki cattermole wrote: On 04/07/2022 6:10 AM, Ola Fosheim Grøstad wrote: People who are willing to use 4 bytes per code point are probably using third party C-libraries that have their own representation, so you have to convert anyway? If you use Unicode and follow their recommendations, you are going to be using dstrings at some point. I hardly ever use anything outside UTF-8, and if I do then I use a well tested unicode library as it has to be correct and up to date to be useful. The utility of going beyond UTF-8 seems to be limited: https://en.wikipedia.org/wiki/UTF-32#Analysis
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 17:27:43 UTC, rikki cattermole wrote: That's going to bite us big time when it comes to Unicode handling which wants to work with dstring's. You can just use ints… It is better to do something commonly used well, than have features that not enough people use to get the quality up. People who are willing to use 4 bytes per code point are probably using third party C-libraries that have their own representation, so you have to convert anyway?
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 11:39:45 UTC, Mike Parker wrote: Language evolution doesn't really mean much until we get all of this sorted. Good point. That's right. But Walter wants to minimize its use in Phobos v2, and there's a strong desire to have a pay-as-you-go DRuntime. I'm not the person to speculate on how the GC fits into that, but I do know they don't yet want to get rid of it. Yes, the Phobos issue is probably a good point, but I don't think the standard library prevents experienced developers from doing anything (people can write their own or use third-party solutions). Although I have been pro GC-free in the past, I am also not so sure that GC-free is the sweet spot in 2022 (due to Rust and C++ having reduced friction significantly). To my mind the sweet spot for larger applications would be to write your own runtime/GUI-framework/libraries in @system and cover your @safe application code with a convenient «non-stop» (or at least only «local stop») GC/ARC solution. But I understand that you cannot say anything specific on this at this point in time. (On a related note, I'll soon be publishing a video of a conversation I had with Walter about the origins of D, and he said something there about the GC that really surprised me.) That would be interesting to hear more about, as the GC was what surprised me the most when I first tried D as a C++ «descendant».
Re: The D Programming Language Vision Document
On Sunday, 3 July 2022 at 08:46:31 UTC, Mike Parker wrote: Feedback is welcome. Thank you for putting this in clear terms. I miss an overarching «primary use scenarios» section to guide further language evolution. How do you know if new language features are good or bad if you have no scenarios to measure them against? It is nice to see that improved move semantics is a goal; then I guess ARC could be something one could envision down the line. That said, I am a bit disappointed that there is no hint of a departure from the current STOP-the-world GC regime, but I guess that reflects reality. My interpretation of the vision document is that the core team sees no need to change the current GC strategy.
Re: DIP1000: Memory Safety in a Modern System Programming Language Pt.1
On Thursday, 23 June 2022 at 06:52:48 UTC, Ola Fosheim Grøstad wrote: On Thursday, 23 June 2022 at 06:36:23 UTC, Ola Fosheim Grøstad wrote: Track the object instead and don’t change the type of the pointer to scope. I guess this is flow typing too, but it is less intrusive to say that the object is either of type «scope» or type «heap» and that regular pointers can hold both than to change the concrete pointer type. Specified concrete types should not change. For people interested in getting more intuition for flow typing: https://www.typescriptlang.org/docs/handbook/2/narrowing.html or chapter 3: https://whiley.org/pdfs/GettingStartedWithWhiley.pdf
Re: DIP1000: Memory Safety in a Modern System Programming Language Pt.1
On Thursday, 23 June 2022 at 06:36:23 UTC, Ola Fosheim Grøstad wrote: Track the object instead and don’t change the type of the pointer to scope. I guess this is flow typing too, but it is less intrusive to say that the object is either of type «scope» or type «heap» and that regular pointers can hold both than to change the concrete pointer type. Specified concrete types should not change.
Re: DIP1000: Memory Safety in a Modern System Programming Language Pt.1
On Thursday, 23 June 2022 at 00:45:09 UTC, Steven Schveighoffer wrote: I think this is the better option. Either that, or that when it returns `p` that trumps any possible `scope` inference. Imagine you have a function like this:
```d
int foo()
{
    int x = 0;
    x = long.max;
    x = 2;
    return x;
}
```
Now today, this causes an error on the assignment of `long.max`, because obviously `x` is an int. But what if, instead, the compiler decides to backtrack and say "actually, if I make x a `long`, then it works!", and *now*, at the end, says "Oh, actually, you can't return a long as an int, what were you thinking?!" This is the equivalent here: you declare something *without* scope, assign it to something that is *not* scope, and then, because sometime later you assigned it to something that *is* scope, it goes back and rewrites the declaration as if you did make it scope, and then complains to you that the magic trick it tried is not valid. This is going to be one of the most confusing features of DIP1000. It is confusing because it introduces flow typing without having flow typing. This messes up the user's mental model of the type system, which is a basic usability flaw. Track the object instead and don't change the type of the pointer to scope. If D wants to do flow typing, do it properly and make it clear to the user. It would be a good feature to have, but it would become D3.
Re: DIP1000: Memory Safety in a Modern System Programming Language Pt.1
On Wednesday, 22 June 2022 at 21:20:33 UTC, Steven Schveighoffer wrote: Full flow analysis will be defeatable by more complex situations:
```d
int* p = null;
if (alwaysEvaluateToFalse())
    p = &arg;
else
    p = new int(5);
return p;
```
That would take a lot of effort just to prove it shouldn't be scope. I guess this is the wrong forum, but two quick points. Some C programmers reuse variables extensively; those programmers will be confused or annoyed. The analysis can be done after an optimization pass, so that at least the simple cases go through smoothly.
Re: DIP1000: Memory Safety in a Modern System Programming Language Pt.1
On Wednesday, 22 June 2022 at 20:48:13 UTC, Steven Schveighoffer wrote: The part about `scope` being shallow. This is a problem. One thing that will be confusing to most users is that it appears to be using "taint" rather than proper flow analysis on the pointed-to object?
```d
int* test(int arg1, int arg2)
{
    int* p = null;
    p = &arg1;
    p = new int(5);
    return p; // complains about p being scope
}
```
Re: DIP1000: Memory Safety in a Modern System Programming Language Pt.1
On Wednesday, 22 June 2022 at 19:09:28 UTC, Dukc wrote: On Tuesday, 21 June 2022 at 15:05:46 UTC, Mike Parker wrote: The blog: https://dlang.org/blog/2022/06/21/dip1000-memory-safety-in-a-modern-system-programming-language-pt-1/ Now in 26th place on Hacker News. This was a nice presentation. If there is a follow-up, maybe create examples with a main and a «run this» button that shows them on run.dlang.org? I suspect some readers will think TL;DR when faced with longer blog posts and just look at the examples (hence the show-don't-tell principle).
Re: Adding Modules to C in 10 Lines of Code
On Monday, 6 June 2022 at 11:23:44 UTC, Daniel N wrote: So Objective-C can import C, but *C* cannot import *C*. Objective-C is a proper superset of C AFAIK.
Re: Adding Modules to C in 10 Lines of Code
On Monday, 6 June 2022 at 11:02:32 UTC, Ola Fosheim Grøstad wrote: Yes, Objective-C has added modules to C since forever… Just rename your .c file to .m I guess that would be the first. Or maybe not… you still use .h, so it depends on the implementation. Pointless discussion really.
Re: Adding Modules to C in 10 Lines of Code
On Monday, 6 June 2022 at 05:49:55 UTC, Paulo Pinto wrote: https://clang.llvm.org/docs/Modules.html And I am out of this thread. Yes, Objective-C has added modules to C since forever… Just rename your .c file to .m I guess that would be the first.
Re: Adding Modules to C in 10 Lines of Code
On Monday, 6 June 2022 at 01:05:38 UTC, zjh wrote: On Monday, 6 June 2022 at 00:19:16 UTC, zjh wrote: Because it's fun to be first! Yes, `'d'` is always independent. [C++'s modules](https://www.oschina.net/news/198583/c-plus-plus-23-to-introduce-module-support) `D`, hurry up and get nervous. C++ has had modules for a while, but only Microsoft has a fully compliant implementation: https://en.cppreference.com/w/cpp/language/modules https://en.cppreference.com/w/cpp/compiler_support/20 Give it a year to be fully usable across compilers.
Re: Release: serverino - please destroy it.
On Tuesday, 10 May 2022 at 19:24:25 UTC, Andrea Fontana wrote: Maybe bambinetto is more about immaturity. Bambinuccio is cute. Bambinaccio is bad. Bambinone is big (an adult that behaves like a child). -ello doesn't sound good with bambino, but it's very similar to -etto. Good luck :) Thanks for the explanation! <3 If only programming languages were this expressive! «Servinuccio»… ;P
Re: Release: serverino - please destroy it.
On Tuesday, 10 May 2022 at 16:05:11 UTC, Andrea Fontana wrote: Oh, italian is full of suffixes. -ello means a slightly different thing. It's small but sounds like a bit pejorative. Oh, and I loved the sound of it… suggests immaturity, perhaps? (I love the -ello and -ella endings. «Bambinella» is one of my favourite words, turns out it is a fruit too!)
Re: Release: serverino - please destroy it.
On Tuesday, 10 May 2022 at 15:27:48 UTC, Andrea Fontana wrote: Indeed the "-ino" suffix in "serverino" stands for "small" in italian. :) Bambino > bambinello? So, the embedded-version could be «serverinello»? :O)
Re: Release: serverino - please destroy it.
On Tuesday, 10 May 2022 at 15:00:06 UTC, Andrea Fontana wrote: I work in the R&D and every single time I even have to write a small api or a simple html interface to control some strange machine I think "omg, I have to set nginx agaain". Good point, there are more application areas than regular websites. Embedded remote applications could be another application area where you want something simple with HTTPS (monitoring webcams, sensors, solar panels, supervising farming houses or whatever).
Re: Release: serverino - please destroy it.
On Tuesday, 10 May 2022 at 12:52:01 UTC, Andrea Fontana wrote: I'm running a whole website in D using fastcgi and we have no problem at all, it's blazing fast. But it's not so easy to setup as serverino :) Easy setup is probably the number one reason people land on a specific web-tech, so it is the best initial angle, I agree. (By version 3.x you know what the practical weak spots are and can rethink the bottom layer.)
Re: Release: serverino - please destroy it.
On Tuesday, 10 May 2022 at 10:49:06 UTC, Andrea Fontana wrote: And you can still handle 700k views per hour with 20 workers! Requests tend to come in bursts from the same client, thanks to clunky JavaScript APIs and clutters of resources (and careless web developers). For a typical D user ease-of-use is probably more important at this point, though, so good luck with your project!
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Wednesday, 9 February 2022 at 17:48:29 UTC, Guillaume Piolat wrote: There is also the Nim "discard" statement. Just change the default so that return values cannot be discarded. When you really want to discard one, do:
```d
cast(void) function_with_return_value(…);
```
Or something like that.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Wednesday, 9 February 2022 at 10:59:03 UTC, Dukc wrote: You're implying that your opinion is rational and apolitical, disagreeing with it is irrational politics. I am implying that there are many symptoms of people not being willing to champion the best possible design and instead have started to look for what they think is easy to get through. I see that in this DIP, in other DIPs and in comments about DIPs people are contemplating. The accumulated outcome of such political design processes are usually not great. I will later try to create a separate thread for this, as Paul does not want this topic in this thread.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 20:52:21 UTC, Paul Backus wrote: If you intended to direct your messages at "the community" in general, rather than at me specifically, you should have started a new thread. As is, with these messages buried several pages deep in a thread about a different topic, most members of "the community" are unlikely to ever even read them in the first place. Good point. I will reread Robert's post on strategy, think about this for a while, and write a more visible post when I have something that captures both Robert's concerns and concerns related to system-level programming. That said, I really wish you had talked more with C++ programmers who make use of modern C++ before writing the DIP. I am personally so used to adding ```[[nodiscard]]``` on all functions that it has become second nature. It is a very valuable feature with regard to refactoring, I think, but also the most ill-conceived design in modern C++ (that I use). I might as well do a ```#define func [[nodiscard]]```.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 19:14:50 UTC, Paul Backus wrote: Let me rephrase: I do not understand why you feel the need to direct these messages at me, personally. I am sorry if you felt I was addressing you personally. That was not intended; maybe bad phrasing on my part. (I tend to send email when addressing people personally! :-) I am more trying to convey what I see has gone wrong in the "modern C++" design department to "the community" in some hope that D can do better. But right now it seems like "C++ did this" is treated as a validation rather than a warning. "@mustUse" will literally be on every single function that I write that returns something… because it is almost always a bug to ignore a return value! In summary: this feature deserves a higher priority than library status. If you have ideas or concerns you wish to present to D's leadership, my advice is to either (a) write a DIP, or (b) get in touch with Mike Parker about attending one of the D Language Foundation's monthly meetings (see the bottom of [his latest meeting summary post][1] for details). Yes, I am considering a DIP on parametric aliases at least, where I think C++ currently has an edge. Thanks for the tip about those meetings; I didn't know one could apply to participate. That might be a possibility in the future, although I think I should probably try to find time to participate in those beer-meetings first to get a rough idea of what is achievable. No point in going to a formal meeting without knowing what the terrain is like. :-)
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 16:20:07 UTC, Paul Backus wrote: I did not reply (and do not intend to reply) to any of the numerous other statements you have made in your other replies to this thread, since they are statements about the design of the D language and the DIP process in general, and are not directly relevant to DIP 1038. Well, but it is relevant to the outcome. In C++ I find that the more I strive to write semantically beautiful code, the less visually beautiful it becomes. My modern C++ code is littered with ```[[nodiscard]]``` and other attributes. If a language that is equally capable allows me to write code that is both semantically beautiful and visually beautiful, then that would offset some of the disadvantages of using a small language. I think many C++ programmers feel that way. Big opportunity that is up for grabs there.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 15:51:46 UTC, Paul Backus wrote: If you're still confused *after* you've read the documentation, feel free to come back and complain to me then. What I stated has nothing to do with documentation. I think the semantics are too important to be a "linter-feature". I also think C++ made a "mistake" by using an attribute for it.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 15:17:35 UTC, Paul Backus wrote: To be honest, though, I can see where he's coming from. When writing DIP 1038, I made a conscious effort to avoid using the term "non-`@nodiscard`", due to the double negative. With a positively-phrased name like `@mustUse`, that problem disappears. And while I am at it, let me commit heresy by proclaiming that this feature is so important that I think it should be the default and that programmers should instead specify that the result is "discardable". That would of course be a terrible-terrible-terrible-breaking-change, and would never fly in the current political climate. But in general: D would become more interesting as a language if we could muster the guts to be different.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 15:17:35 UTC, Paul Backus wrote: On Sunday, 6 February 2022 at 14:44:40 UTC, Ola Fosheim Grøstad wrote: On Sunday, 6 February 2022 at 13:33:53 UTC, Paul Backus wrote: @mustUse is a user-defined attribute, and the official style guide says that names of UDAs should be camelCased: It is kinda confusing to call it a user-defined attribute if it is recognized by the compiler. Compiler-recognized UDAs are an established feature of D. See [`core.attribute`][1] for more examples. I don't need those, so I don't care… The feature you are proposing with this DIP is a *very important one* in my view, and I would use it almost everywhere.
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 14:56:42 UTC, Paul Backus wrote: If you strongly prefer the lower-case version, you can always rename it in your own code: import core.attribute: mustuse = mustUse; This response is getting a bit long-winded, and I really want this feature, but…
1. *Process*: I apologise for having missed the original DIP feedback thread. I understand that this retrospective feedback might be annoying and is easy to dismiss, but please consider making this feature as attractive as possible, and let us focus on improving the language proper rather than shifting everything to the standard library where the threshold is lower.
2. *Renaming key features*: The renaming looks reasonable at first glance, but on reflection I think it will lead to a mess and make code less readable if many developers start doing this. I don't think programmers should be allowed to rename it at all! Maybe there should be a popularity vote on the syntax?
3. *The politics of language improvements*: I don't think this should be a library type; the feature is too important for that. To me this smells of "let's move the syntax to a library to avoid any discussion about breaking changes". Design considerations should not become political; we need to get rid of politics and focus on principled strategies that make the whole ecosystem attractive to more developers (the ones we don't have).
4. *Mandatory ecosystem*: How does the compiler recognize it? I hope it is by intrinsics and not by "symbol path". That seems to be an important omission in the DIP, unless I overlooked it. For instance, if some subgroup of D programmers wants to grow their own independent niche runtime/library combo, are they then free to write their own standalone hierarchy? Or is the standard library now taking over the introduction of language features in order to downplay politics?
5. *Make syntax pretty*: It would actually be better to have this as part of the formal syntax, as a non-attribute without the "@". One thing that makes D source code hard on the eyes is the "@" noise. The current parser is a bit primitive, but with a little careful planning you can often accept terms in the syntax without making them keywords.
6. *Strategic positioning*: And yes, C++ syntax is becoming ugly too, but this is also why **making D pretty should be a strategic concern**! Can we afford to add more visual noise? I think not…
Re: DIP 1038--"@mustUse" (formerly "@noDiscard")--Accepted
On Sunday, 6 February 2022 at 13:33:53 UTC, Paul Backus wrote: On Sunday, 6 February 2022 at 10:55:20 UTC, Daniel N wrote: Guess I'm way too late, I just find it very strange you settled on mixedCase, it's not used for anything else. (nothrow @nogc). I also don't agree with the motivation that @use is hard to search for because @ is an unusual symbol. @mustUse is a user-defined attribute, and the official style guide says that names of UDAs should be camelCased: It is kinda confusing to call it a user-defined attribute if it is recognized by the compiler. I dislike the camel case as well, and the name is less clear than "nodiscard" in my opinion.
Re: The DIID series (Do It In D)
On Friday, 28 January 2022 at 13:27:33 UTC, WebFreak001 wrote: If there are people that would get upset from removing it, it's something that shouldn't be removed. (as there are people who are still interested in the project and might still use it) I hear what you are saying, but maybe there is value in some of those abandoned projects for people who want to get ideas, so who am I to judge? I can only judge what I would look for… One thing I like about the micro-examples approach that p0nce has taken is that they can be automatically tested against the latest compiler, so that makes the whole "outdated" question objective. :-)
Re: The DIID series (Do It In D)
On Thursday, 27 January 2022 at 08:52:32 UTC, WebFreak001 wrote: the list is being maintained, feel free to open PRs to update links and remove old stuff. It is probably better that the current maintainers remove stuff; I think people would get upset if someone else started to wipe out projects that haven't received updates in a year or that are just not ready for consumption.
Re: The DIID series (Do It In D)
On Wednesday, 26 January 2022 at 19:25:18 UTC, Guillaume Piolat wrote: On Wednesday, 26 January 2022 at 15:53:44 UTC, Ola Fosheim Grøstad wrote: Is this list out of date? https://github.com/dlang-community/awesome-d I think it's alright. It's somehow out of date with the game engines I guess. I just clicked through many of the GitHub repos (not the ones for games). Xomb had not been touched since 2013, Warp not since 2015… A mixed bag. Your list will become more useful, I think.
Re: The DIID series (Do It In D)
On Wednesday, 26 January 2022 at 13:14:49 UTC, Guillaume Piolat wrote: Precisely I opened this thread because it's hard to know about everything that exist in the D ecosystem. I expected tips for this or that library. Is this list out of date? https://github.com/dlang-community/awesome-d Anyway, the short examples you provide are a good format. Full tutorials can often be too time consuming…
Re: The DIID series (Do It In D)
On Tuesday, 25 January 2022 at 08:44:34 UTC, Guillaume Piolat wrote: I always read "How good really is X?" as "this is bad" and "How bad really is X?" as "this is good" Yes, I think that is pretty universal. Didn't feel anything was wrong with the title, but the fact that most examples used "arsd" gave me the impression that there was only one good library…
Re: Why I Like D
On Friday, 14 January 2022 at 18:54:26 UTC, Steven Schveighoffer wrote: You might as well say that C is unusable at a high level vs. javascript because you need to decide what type of number you want, is it int, float, long? OMG SO MANY CHOICES. Bad choice of example… C is close to unusable at a high level and C++ is remarkably unproductive if you only want to do high level stuff. But yes, the problem with D const isn't that there are many choices. The problem is that there is only one over-extended choice.
Re: Why I Like D
On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote: compiler). You can write functional-style code, and, thanks to metaprogramming, you can even use more obscure paradigms like declarative programming. No, you can't. You can do a little bit of weak declarative programming in C++ thanks to SFINAE. The D type system does not provide a capable solver. I can theoretically do everything in C++ that I do in D, for example, Only with the GC, and even then that claim is a stretch. Without the GC you lose features that C++ has. In C++, I'm guaranteed that there is no GC -- even when having a GC might actually help me achieve what I want. In order to You have access to several GCs in the C++ ecosystem. that are not compatible with the GC, etc. Definitely NOT worth the effort for one-off shell script replacements. It takes 10x Never seen a scripting problem that cannot be handled well with Python, so why would I not use Python for scripting? When you sacrifice the system level programming aspect in order to make scripting more convenient, then you lose focus. And people who primarily want to do system level programming will not respond well to it. Hardly surprising. With D, I can work at the high level and solve my problem long before I even finish writing the same code in C++. This is great, but does not solve the other issues. And when I need to dig under the hood, D doesn't stop me -- it's perfectly fine with malloc/free and other such alternatives. Nobody is fine with malloc/free. Even in C++ that is considered bad form. This is why these fanboy discussions never go anywhere. People make up arguments and pretend that they are reality. Well, it isn't. Rust and C++ are doing better than D in terms of adoption, and it isn't just marketing. It is related to actual design considerations and a willingness to adapt to the usage scenario. Rust has actually focused on runtime-free builds. They pay attention to demand.
Despite Rust being "high level" and "normative", they pay attention to system level usage scenarios beyond those of browsers. I think this is why it is easier to believe in the future of Rust than in many other alternatives. And I don't have a preference for Rust, at all.
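For reference, this is what the malloc/free route actually looks like in D, and it does work mechanically (the helper name `makeBuffer` is my invention for the sketch). The point stands, though: you are entirely on your own for lifetimes, which is exactly why even C++ considers raw malloc/free bad form.

```d
import core.stdc.stdlib : free, malloc;

// Hypothetical sketch: @nogc code can use the C allocator directly
// and still work with D slices over the raw memory.
@nogc nothrow int[] makeBuffer(size_t n)
{
    auto p = cast(int*) malloc(n * int.sizeof);
    return p is null ? null : p[0 .. n];
}

@nogc nothrow void main()
{
    auto buf = makeBuffer(4);
    if (buf is null) return;
    scope(exit) free(buf.ptr); // manual lifetime management, no safety net
    buf[] = 42;                // slice operations work on malloc'd memory
    assert(buf[3] == 42);
}
```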
Re: Why I Like D
On Thursday, 13 January 2022 at 21:32:15 UTC, Paul Backus wrote: As you correctly observe, D is a great language for programmers who want autonomy--far better than something like Java, Go, or Rust, which impose relatively strict top-down visions of how code ought to be written. I keep seeing people in forum threads claiming that Rust is not a system level language, but a high level language (that poses as system level). With the exception of exceptions (pun?), C++ pretty much is an add-on language. You can enable the stuff you need. The default is rather limited. I personally always enable g++ extensions. And having to deal with exceptions when using the system library is a point of contention. It should have been an add-on for C++ to fulfil the system level vision. C is very much bare-bones, but you have different compilers that "add on" things you might need for particular niches. Which of course is also why the bit widths are platform dependent. By being bare-bones, C is to a large extent extended by add-ons in terms of macros and assembly routines for specific platforms. This modular add-on aspect is essential for system level programming, as the contexts are very different (hardware, OS, usage, correctness requirements etc). In hardcore system level programming the ecosystem actually isn't all that critical. Platform support is important. Cross platform is important. One singular domain specific framework might be important. But you will to a large extent end up writing your own libraries.
Re: Why I Like D
On Thursday, 13 January 2022 at 16:33:59 UTC, Paulo Pinto wrote: ARC, tracing GC, whatever, but make up your mind, otherwise other languages that know what they want to be get the spotlight with such vendors. Go has a concurrent collector, so I would assume it is reasonably well-behaved with regard to other system components (e.g. it does not sporadically saturate the data bus for a long time). Go's runtime also appears to be fairly limited, so it does not surprise me that people want to use it on microcontrollers. We had some people in these forums who were interested in using D for embedded, but they seemed to give up as modifying the runtime was more work than it was worth for them. That is at least my interpretation of what they stated when they left. So, D has not made a point of capturing embedded programmers in the past, and there are no plans for a strategic change in that regard AFAIK.
Re: Why I Like D
On Thursday, 13 January 2022 at 11:57:41 UTC, Araq wrote: But the time it takes depends on the number of threads it has to stop and the amount of live memory of your heap. If it took 4ms regardless of these factors it wouldn't be bad, but that's not how D's GC works... Sadly, fast scanning is still bad, unless you are on an architecture where you can scan without touching the caches. If you burst through gigabytes of memory then you have a negative effect on real time threads that expect lookup tables to be in the caches. That means you need more headroom in real time threads, so you sacrifice the quality of work done by real time threads by saturating the memory data bus. It would be better to have a concurrent collector that slowly crawls, or to just take the predictable overhead of ARC, which is distributed fairly evenly in time (unless you do something silly).
Re: Why I Like D
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov wrote: TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that. "Time it" isn't really useful for someone starting on a project, as it is too late by the time you have something worth measuring. The reason for this is that it gets worse and worse as your application grows. Then you end up either giving up on the project or going through a very expensive and bug-prone rewrite. There is no trivial upgrade path for code relying on the D GC. And quite frankly, 4 ms is not a realistic worst-case scenario for the D GC. You have to wait for all threads to stop on the worst possible OS/old-budget-hardware/program-state configuration. It is better to start with a solution that is known to scale well if you are writing highly interactive applications. For D that could be ARC.
Re: Why I Like D
On Wednesday, 12 January 2022 at 20:48:39 UTC, forkit wrote: Fear of GC is just a catch-all-phrase that serves no real purpose, and provides no real insight into what programmers are thinking. "Fear of GC" is just a recurring _excuse_ for not fixing the most outdated aspects of the language/compiler/runtime. I have no fear of GC, I've used GC languages since forever, but I would never want a GC in the context of system level or real time programming. I also don't want to deal with mixing mostly incompatible memory management schemes in an application dominated by system level programming. In this context a GC should be something local, e.g. you might want to use a GC for a specific graph or scripting language in your application. Do I want a GC/ARC for most of my high level programming? Hell yes! But not for system level programming, ever. (Walter has always positioned D as a system level language and it should be judged as such. Maybe D isn't a system level language, but then the vision should be changed accordingly.) It's all about autonomy and self-government (on the decision of whether to use GC or not, when to use it, and when not to use it). That is the essence of system level programming: you adapt the language usage to the hardware/use context, not the other way around. You shouldn't be glued to nonsensical defaults that you have to disable. You should have access to building blocks that you can compose to suit the domain you are working with. A GC can be one such building block, and in fact, the C++ community does provide several GCs as building blocks, but there is no force feeding… Which is why C++ is viewed as a hardcore system level language by everyone and D isn't. I don't believe people are attracted to D because it has GC. There are better languages, and better supported languages, with GC. Or more importantly; low latency GCs and a language designed for it!
Also, the idea that 'GC' means you never have to think about memory management... is just a ridiculous statement.. I don't have to think much about memory management in Python, JavaScript or Go, but I would also never do anything close to system level programming in those languages. You can create very interesting interactive applications in JavaScript, but then you:

1. Rely on clever system level programming in a very heavy browser runtime.
2. Use an ecosystem for interactive applications that is designed around the specific performance characteristics of the JavaScript runtime.
3. Adapt the application design to the limitations of the browser platform.
4. Get to use a much better low latency GC.

Points 1, 2 and 3 are not acceptable for a system level language… So that is a different situation. And D does not provide 4, so again, a different situation. Cheers!
Re: DMD now incorporates a disassembler
On Saturday, 8 January 2022 at 20:50:56 UTC, max haughton wrote: Most other compilers have been able to do this for years. Forever. I have never used a C compiler that doesn't output assembly on request. Pretty much a cultural requirement as C compilers used to pipe asm through a separate assembler.
Re: D Language Foundation Quarterly Meeting, October 2021
On Saturday, 6 November 2021 at 15:46:57 UTC, JN wrote: This is much less of a strength than you think. For 90% of cases, lack of metaprogramming is resolved by putting a Python script in the build step that autogenerates the necessary code. Yes, I agree. For a single project metaprogramming has little impact. But such scripts are rarely reused between projects. Where metaprogramming has high potential is in creating more adaptive frameworks that can be reused in many projects. It does require a high level of sophistication and insight (and experimentation) to build such frameworks though.
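To make the contrast concrete: the kind of boilerplate a build-step Python script would emit can instead be generated inside the language with CTFE and string mixins, and that generator can be shipped as a reusable library. A minimal sketch (the `makeGetters` helper and struct names here are invented for illustration):

```d
import std.traits : FieldNameTuple;

// Build, at compile time, a getter for every field of T.
// This replaces an external codegen script with an in-language one.
string makeGetters(T)()
{
    string code;
    foreach (member; FieldNameTuple!T)
        code ~= "auto get_" ~ member ~ "() { return value." ~ member ~ "; }\n";
    return code;
}

struct Point { int x; int y; }

struct Wrapper
{
    Point value;
    mixin(makeGetters!Point());  // injects get_x() and get_y()
}

void main()
{
    auto w = Wrapper(Point(3, 4));
    assert(w.get_x() == 3 && w.get_y() == 4);
}
```

Because the generator is ordinary D code, it can be reused across projects the way a one-off build script rarely is.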
Re: D Language Foundation Monthly Meeting Summary
On Thursday, 10 June 2021 at 10:55:50 UTC, sighoya wrote: That doesn't mean tracing GC is bad, I'm still skeptical that arc + cycle detection is better than tracing in general for true high level languages. For truly high level languages garbage collection probably is the best, if you design the language semantics for it. The main issue with D is that the language semantics don't enable competitive GC advantages. I think at least D should go with ARC for shared resources. Then have a variety of options for task-local resources, including GC.
Re: D Language Foundation Monthly Meeting Summary
On Monday, 7 June 2021 at 18:37:54 UTC, sai wrote: My use case of writing GUI apps for desktop - presence of GC does not matter for me at all. In fact its great for me. Hopefully D will not stop covering these use cases. Great, I am interested in highly interactive apps (games, sound editors, graphics editors, audio plugins, etc). Maybe we could create a focus group and collect experiences, approaches, weak spots, strong spots? Right now I think many feel left in the dark when they come with an idea for an app as there is little guidance of how to build a bigger app. I sense this by watching the learn-forum.
Re: D Language Foundation Monthly Meeting Summary
On Saturday, 5 June 2021 at 09:47:11 UTC, Imperatorn wrote: I get your point, but I still think GC will remain mainly because of the area D is trying to cover. You mean the wait-for-everything-to-stop-scan-everything approach will remain? What is the area that D is trying to cover though? Somebody should write a memo on that.
Re: D Language Foundation Monthly Meeting Summary
On Saturday, 5 June 2021 at 08:58:47 UTC, Paulo Pinto wrote: Meanwhile kids, the future generation of developers, keep adopting the hardware and programming languages listed above, D isn't useful for teaching kids programming. Way too complicated. Most Arduino users, who build useful stuff, use C++. But it is not a good strategy for D to become more like C++; too late. It would've been a good strategy 6 years ago to align D semantics with C++ and have full interop, but too late now. D needs a feature-set that makes it attractive for people wanting to do high profile interactive stuff, like games, graphics editors, sound editors, high performance services. With a useful optional GC and easy multithreading. The current GC strategy is a dead end. No GC makes the language too much of a C++ with no real edge. D needs to offer something other languages do not, to offset the cost of learning the language's complexities.
Re: D Language Foundation Monthly Meeting Summary
On Friday, 4 June 2021 at 18:34:32 UTC, Imperatorn wrote: You might be surprised, but it's actually not up to you what topic fits or not. This is the announce forum, so it is kinda misplaced, but we are all contributing to this so... :) Obviously GC is good for some things and not good at all for other things. The problem is that the D-style GC is not a good fit for anything interactive, beyond simple applications. My impression is that most people use D for batch programs, so I guess that shapes the opinion. And that is a problem for D. A bad GC-strategy is reinforced by the remaining majority, which is a tiny fraction of the overall programming community. These days you don't really need a system level language to write batch programs. So it is not a good strategy to hold onto this specific type of stop-everything-scan-everything GC. Unless D decides to not be a system level language, but then you need a lot more convenience features and to become more scripty. The in-between position is not the top pick for anyone looking for a solution. Not being willing to switch MM strategy means being stuck on a tiny island, too afraid of crossing the ocean to get access to the mainland. My impression is that Walter would rather stay on this tiny island than take any chances. The language is being extended with incomplete experimental features, instead of going to the core of the issue and doing something with the foundation of the language. That is not going to end well. You'll end up with a patchwork.
Re: D Language Foundation Monthly Meeting Summary
On Friday, 4 June 2021 at 21:35:43 UTC, IGotD- wrote: D certainly has the power to do so but the question is if there is any will power in this community. Nothing has happened for almost 20 years. I guess importC will make changes even more unlikely. Absorbing C is nice, but it has the unfortunate effect of giving D some of the same disadvantages as C++.
Re: D Language Foundation Monthly Meeting Summary
On Friday, 4 June 2021 at 14:07:38 UTC, drug wrote: I use GC when developing an algorithm to solve my problem. After I have implemented the algorithm I can redesign it to avoid GC (if needed). It works pretty nicely in my case at least. Because initially I concentrate on my domain problem and only then deal with memory management. This separation is very helpful. Yes, if you select that strategy from the start. But think for a moment how much easier it would be if the language had ownership pointers. I also believe that careful usage of ownership pointers in combination with precise scanning could lead to much less memory being scanned. There are no language features in D that support GC-strategies. That's not a strength. It can be remedied, but it takes willpower.
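For what it's worth, D does at least let the compiler check the second phase of that workflow: the `@nogc` attribute makes the redesigned version fail to compile if any hidden GC allocation remains. A small sketch (the `squares` function is invented for the example):

```d
// First pass: prototype with the GC, concentrate on the domain problem.
int[] squares(int n)
{
    int[] result;
    foreach (i; 0 .. n)
        result ~= i * i;   // GC-allocated append, fine for prototyping
    return result;
}

// Later pass: redesign without the GC. The caller owns the buffer, and
// @nogc makes the compiler reject any allocation we might have missed.
@nogc nothrow int[] squares(int n, int[] buffer)
{
    assert(buffer.length >= n);
    foreach (i; 0 .. n)
        buffer[i] = i * i;
    return buffer[0 .. n];  // slicing does not allocate
}

void main()
{
    assert(squares(4) == [0, 1, 4, 9]);
    int[8] buf;
    assert(squares(4, buf[]) == [0, 1, 4, 9]);
}
```

Ownership pointers would make the hand-off between the two phases safer still, which is the point above.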
Re: D Language Foundation Monthly Meeting Summary
On Friday, 4 June 2021 at 13:32:37 UTC, Ola Fosheim Grøstad wrote: On Friday, 4 June 2021 at 12:44:07 UTC, Imperatorn wrote: GC won't go away tho. What might happen is more flexibility. The GC-phobia is irrational. The topic doesn't fit in this thread, but it isn't irrational. The most irrational issue here is that the language itself prevents precise collection, and there is no willpower to change it. If you combine task-local GC with fully precise compiler-guided scanning, then you'd have something that would work.
Re: D Language Foundation Monthly Meeting Summary
On Friday, 4 June 2021 at 12:44:07 UTC, Imperatorn wrote: GC won't go away tho. What might happen is more flexibility. The GC-phobia is irrational. The topic doesn't fit in this thread, but it isn't irrational. You have to wait for all participating threads to be ready to collect, so it isn't only about collection speed. In essence you end up with some of the same issues as with cooperative multitasking. And it is also obvious that collection speed will drop as your application grows and you start working with larger datasets. So, you might initially think it is fine, but end up rewriting your codebase because it only worked well with the simple prototype you started with. That's not a good strategy. (but ok for batch programs)
Re: D Language Foundation Monthly Meeting Summary
On Friday, 4 June 2021 at 00:39:41 UTC, IGotD- wrote: On Friday, 4 June 2021 at 00:14:11 UTC, zjh wrote: Zim: the grammar is ugly. Zim? Is that what they speak in Zimbabwe? Zig.
Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"
On Monday, 19 April 2021 at 09:06:17 UTC, FeepingCreature wrote: Right, I agree with all of this. I just think the way to get to it is to first allow everything, and then in a second step pare it down to something that does what people need while also being monitorable. This is as "simple" as merging every IO call the program does into its state count. I am not saying it is wrong for a new language to try out this philosophy, assuming you are willing to go through a series of major revisions of the language. But I think for a language where "breaking changes" is made a big deal of, you want to stay conservative. In that regard, I agree it would be social, as in, if you clearly state upfront that your new language will come in major versions with major breakage at regular intervals, then you should have more freedom to explore and end up with something much better. Which is not necessarily a deal breaker if you also support stable versions with some clear time window, like "version 2 is supported until 2030". So, yeah, it is possible. But you have to teach the programmers your philosophy and make them understand that some versions of the language have a long support window and other versions are more short-lived. Maybe make it clear in the versioning naming-scheme. D's problem is that there is only one stable version, and that is the most recent one... that makes changes more difficult. Also, there are not enough users to get significant experience with "experimental features". What works for C++ when providing experimental features might not work for smaller languages - or, if not, whether D can make it work. At any rate, with a lot of features like implicit conversions, I think people would find that they're harmless and highly useful if they'd just try them for a while. A lot of features are harmless on a small scale. Python is a very flexible language and you can do a lot of stuff in it that you should not do.
Despite this it works very well on a small scale. However, for Python to work on a larger scale it takes a lot of discipline (social constraints in place of technical constraints) and carefully chosen libraries etc. The use context matters when discussing what is acceptable and what isn't. As such, people might have different views on language features and there might be no right/wrong solution. Implicit conversions are good for custom ADTs, but the interaction with overloading can be problematic, so it takes a lot of foresight to get it right. A geometry library can benefit greatly from implicit conversions, but you can run into problems when mixing libraries that overuse implicit conversions... So, it isn't only a question of whether implicit conversions are a bad thing or not, but how the language limits "chains of conversion" and overloads, and makes it easy to predict for the programmer when looking at a piece of code.
Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"
On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature wrote: This is a social issue more than a technical one. The framework can help, by limiting access to disk and URLs and allowing tracing and hijacking, but ultimately you have to rely on code to not do crazy things. I think the downsides are conceptual and technical, not social. If you can implement a version counter then you get all kinds of problems, like the first compilation succeeding and the second compilation failing with no code changes. Also, you can no longer cache intermediate representations between compilations without rather significant additional machinery. It is better to do this in a more functional way: you can generate a file, but it isn't written to disk; it is an abstract entity during compilation and is turned into something concrete after compilation. to find out what works and what doesn't, and you can't gather experience with what people actually want to do and how it works in practice if you lock things down from the start. That's ok for a prototype, but not for a production language. Most of my annoyances with D are issues where D isn't willing to take an additional step even though it would be technically very feasible. No implicit conversion for user-defined types, no arbitrary IO calls in CTFE, no returning AST trees from CTFE functions that are automatically inserted to create macros, and of course the cumbersome special-cased metaprogramming for type inspection instead of just letting us pass a type object to a CTFE function and calling methods on it. So, anything that can be deduced from the input is fair game, but allowing arbitrary I/O is a completely different beast; compilation has to be idempotent. It should not be possible to write a program where the first compilation succeeds and the second compilation fails with no code changes between the compilation runs. Such failures should be limited to the build system so that you can quickly correct the problem.
IMHO, a good productive language makes debugging easier, faster and less frequently needed. Anything that goes against that is a move in the wrong direction.
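The distinction argued above is visible in today's CTFE rules: anything deducible from the source is fair game, while file dependencies are pushed out to the build system. A small illustration (the commented-out lines show what is deliberately disallowed):

```d
// Deterministic computation is fine at compile time:
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
enum f10 = fib(10);
static assert(f10 == 55);  // evaluated entirely during compilation

// Arbitrary I/O is not: allowing it would make compilation
// non-idempotent, so CTFE rejects it.
// enum config = std.file.readText("config.txt"); // Error: not available in CTFE

// D's compromise is string imports: the file dependency must be declared
// to the build via the -J flag, so it stays visible to the build system.
// enum config = import("config.txt");

void main() {}
```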
Re: Please Congratulate My New Assistant
On Tuesday, 26 January 2021 at 16:05:03 UTC, Paul Backus wrote: Well, the incorrect behavior is a liability whether we have an issue for it in bugzilla or not. The issue itself is an asset. IFF there is a development process that ensures that old issues are being reconsidered for every release. Having a process where >3-year-old issues are being recorded in a document in a structured fashion probably would be a good idea. Then they could be used for planning. Without that they will most likely never be included in any kind of plan. It is just easier to ignore an "issue" that has been silently accepted for a decade than a recent one.
Re: Please Congratulate My New Assistant
On Monday, 25 January 2021 at 21:25:28 UTC, H. S. Teoh wrote: So don't look at the bug count as some kind of liability to rid ourselves of by whatever means possible; rather, look at it as a sign of life and the opportunity to grow. Depends on the nature of the bug, doesn't it? If the bug is related to the compiler rejecting too many programs, then it is ok. If the bug is related to accepting programs it cannot generate correct code for, then that is a big issue...
Re: Please Congratulate My New Assistant
On Tuesday, 19 January 2021 at 09:33:26 UTC, Paolo Invernizzi wrote: On Tuesday, 19 January 2021 at 00:12:49 UTC, Max Haughton wrote: The second category is a bit looser, as there are some things I'd like to do that come under the community relations remit that aren't as structured - e.g. I am very interested in getting a proper working group together to try and iterate through designs properly rather than incremental DIPs. That would be great! +1
Re: styx, a programming language written in D, is on the bootstrap path
On Friday, 15 January 2021 at 19:18:09 UTC, Basile B. wrote: I plan to use dparse for the most part, not only to convert but also to detect non-bootstrappable code or missing features. Ah, smart. I've been thinking about using an existing D parser to convert unit tests from D to my Dex syntax (experimental project). Modifying the compiler is fun, but writing unit tests is not... This is a noble reason. Styx has no such motivations. It is simpler than D1 for example and has no killer feature, What made D1 attractive to many C++ programmers was that it was stripped down. Also, many language designers get tempted to add many features that are hollow, then they regret it and rip it all out again (lots of wasted effort and source code). So, being very restrictive and patient is a good thing, I believe. The truly good ideas take time to "grow" (in one's mind). just 3 or 4 creative things are

- optional break/continue expression
- explicit overloads
- DotExpression aliases (they were proposed to DMD this summer when I worked "under cover" as Nils.)
- pointers to member functions are very different from what I have seen so far (no fat pointer)

"Nils" is a very Scandinavian name? :-) It will be interesting to see what your codebase looks like after moving to self-hosted. I assume you will keep us up to date. I finally decided to start on a lexer for it... How long did it take you to get where you are now? The project has existed for several years (2017-04-13 20:05:51) but has only been actively developed since July 2020. The game changers were:

- using LLVM instead of libfirm
- realizing that some parts of the initial design were bad
- a proper lvalue implementation

But that is only 6 months? Then you have come quite far if you are already going for self hosting. I'm still rethinking my lexer. Hehe. Like, do I want to make keywords tokens or should they just be lexed as identifiers? I did the first, but think maybe the last is more flexible, so a rewrite... is coming. ;)
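For what it's worth, the identifier-first approach I'm leaning towards can be sketched in a few lines: the lexer emits plain identifiers and a separate classification step maps keyword spellings to token kinds (the token names here are invented for the sketch):

```d
enum TokenKind { identifier, kwIf, kwWhile, kwReturn }

// Classify an already-lexed identifier. Unknown spellings stay
// identifiers, which keeps the lexer itself simple and makes the
// keyword set trivial to change later (or to make context-sensitive).
TokenKind classify(string lexeme)
{
    switch (lexeme)
    {
        case "if":     return TokenKind.kwIf;
        case "while":  return TokenKind.kwWhile;
        case "return": return TokenKind.kwReturn;
        default:       return TokenKind.identifier;
    }
}

void main()
{
    assert(classify("while") == TokenKind.kwWhile);
    assert(classify("whileLoop") == TokenKind.identifier);
}
```

The flexibility comes from the separation: a later pass can reinterpret a keyword as an identifier (contextual keywords) without touching the lexer.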
Re: styx, a programming language written in D, is on the bootstrap path
On Thursday, 14 January 2021 at 17:51:51 UTC, Basile B. wrote: This is the last[1] occasion to speak about a programming language initially made in D, as the bootstrap phase is very near. Interesting project! How did you move from D to Styx? (I assume bootstrapping means a self-hosting compiler?) Did you write some scripts to translate? I've found myself sketching new programming languages whenever I hit things in existing languages that I find annoying over the past decade or so. I finally decided to start on a lexer for it... How long did it take you to get where you are now?
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Thursday, 7 January 2021 at 13:03:54 UTC, Luhrel wrote: I think that `int[$] a = [1, 2, 3]` is much more user-friendly.

```
auto a = [1, 2, 3].staticArray!ubyte;
```

But what prevents you from writing your own library solution that works like this?

```
auto ints = mkarray(1, 2, 3, 4, 5);
auto floats = mkarray(1.0f, 2, 3, 4, 5);
```

etc
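A sketch of such an `mkarray` (the name and implementation are my invention here; Phobos' `std.array.staticArray` covers similar ground), inferring both the static length and a common element type from a variadic argument list:

```d
import std.traits : CommonType;

// Infers the element type and the static length from the arguments,
// so no length or type has to be spelled out at the call site.
auto mkarray(Args...)(Args args)
{
    CommonType!Args[Args.length] result = [args];
    return result;
}

void main()
{
    auto ints = mkarray(1, 2, 3, 4, 5);
    auto floats = mkarray(1.0f, 2, 3, 4, 5);
    static assert(is(typeof(ints) == int[5]));
    static assert(is(typeof(floats) == float[5]));
    assert(ints[4] == 5 && floats[0] == 1.0f);
}
```

This gives most of what the DIP asks for without a language change, though admittedly not the `int[$] a = [1, 2, 3]` spelling itself.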
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Wednesday, 6 January 2021 at 12:02:05 UTC, Basile B. wrote: No. I agree. Do you imagine if this conversation was in the official DIP review. E.g those two a**holes who troll the review process /s LOL, people have their own frame of reference, so the shorter the DIP the more interpretations you get. :-D
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Wednesday, 6 January 2021 at 11:39:08 UTC, Basile B. wrote: Hmm... My take is that this proposal is auto with a constraint, except it will also do implicit conversion. yeah the split of DIP feedbacks and DIP discussions was clearly not a bad thing ^^ Do you disagree?
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Wednesday, 6 January 2021 at 11:18:22 UTC, Basile B. wrote: I thought about auto when reading the DIP too, but auto is more used like a Type (although being a storage class ;) ). It's never used to infer a value, i.e an expression. While I understand what you mean this is unrelated. Dollar is very well suited. Hmm... My take is that this proposal is auto with a constraint, except it will also do implicit conversion.
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Wednesday, 6 January 2021 at 10:55:39 UTC, Ola Fosheim Grøstad wrote: int[_] = … or _[_] … To expand on this with more examples, you might want to constrain "auto" in various ways with pattern matching:

```
// ensure static array of something with length 4
_[4] v = f();

// ensure that I get a MyContainer with some unspecified type elements
MyContainer<_> c = g();

// define a function that swaps the content of an array of length 2
void swap(ref _[2] a){ … }

// it would also be shorter than auto, but not sure if that is a good thing
_ x = 3  // same as "auto x = 3"
```
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Wednesday, 6 January 2021 at 10:58:23 UTC, Basile B. wrote: '$' is not an ident char, that's why that works Yeah, but "$" means length in D. I think it would be valuable to have more generic constraints than the DIP suggests so that it can be useful in multiple contexts. Would appropriating "_" break a lot of code?
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
I am in favour of more controlled type inference in general, but perhaps this one is a bit specific. What if you defined "_" to mean "deduce whatever should be in this spot", not only for static arrays, but for all types? Then you could do: int[_] = … or _[_] … etc
Re: Printing shortest decimal form of floating point number with Mir
On Wednesday, 6 January 2021 at 06:50:34 UTC, Walter Bright wrote: As far as I can tell, the only algorithms that are incorrect with extended precision intermediate values are ones specifically designed to tease out the roundoff to the reduced precision. It becomes impossible to write good unit-tests for floating point if you don't know what the exact results should be. Anyway, it is ok if this is left up to the compiler vendor, provided you can test a flag for it. Just get rid of implicit conversion for floating point. Nobody interested in numerics would want that.
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 21:46:34 UTC, Ola Fosheim Grøstad wrote: It is very useful to create a simple alias from a complex type for export from a type library, and then it breaks when people use that type library to write templated functions. People do this all the time in C++. Example:

    // library code
    struct _config(T) {}
    struct _matrix(T, C) {}
    alias matrix(T) = _matrix!(T, _config!T);

    // application code
    void f(T)(matrix!T m) {}

    void main()
    {
        f(matrix!float());
        f(matrix!double());
    }
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 21:43:09 UTC, welkam wrote: Replace alias Bar(T) = Foo!T; with alias Bar = Foo;

    struct Foo(T) {}
    alias Bar = Foo;

    void f(T)(Bar!T x) {}

    void main()
    {
        auto foo = Bar!int();
        f(foo);
    }

The example was a reduced case. One can trivially construct examples where that won't work. It is very useful to create a simple alias from a complex type for export from a type library, and then it breaks when people use that type library to write templated functions. People do this all the time in C++.
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 21:03:40 UTC, welkam wrote: This code compiles

    struct bar(T) {}
    void f(T)(bar!T x) {}

    void main()
    {
        alias fooInt = bar!int;
        alias foo = bar;
        assert(is(fooInt == bar!int));
        assert(is(foo!int == bar!int));
        assert(is(fooInt == foo!int));
    }

This code has no relation to what we discuss in this thread…
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 18:48:06 UTC, ag0aep6g wrote: On Tuesday, 5 January 2021 at 18:06:32 UTC, Ola Fosheim Grøstad wrote: My main concern is that we need to attract more people with a strong comp.sci. background, because as a language grows it becomes trickier to improve, and the most difficult topics are the ones that remain unresolved (like we see with @live, shared and the GC). I don't have that background myself, so I don't think I can provide any insight here. Well, what I mean is that it is not so bad if D is perceived as an "enthusiast language": then you don't expect a flawless implementation. If the language spec outlines something that is "beautiful" (also in a theoretical sense) and shows where the implementation needs some love, then people can contribute in areas they are interested in. If the spec is so-so, then it will be a revolving door... It probably would be a good idea to focus on one subsystem at a time. Refactor, document, make a prioritized list of improvements for that subsystem, then improve/reimplement and document, then move on to the next subsystem. If memory management is the focus now, then that is great, but maybe the next cycle could take another look at the type system as a whole. I'm afraid I don't have anything profound to contribute here either. I have no idea how to manage a group of volunteers (including Walter). Most people will shy away from the difficult, tedious or boring bits, so by keeping focus on one subsystem at a time, one could hope that the difficult/tedious/boring bits receive more attention... (Nothing specific to D, just human behaviour.)
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 17:13:01 UTC, ag0aep6g wrote: Sure. I've said in my first post in this thread that "issue 1807 is well worth fixing/implementing". Ok, if we have a majority for this, then all is good. A program has a bug when it doesn't behave as intended by its author. I think that's a pretty permissive definition of bug. So, DMD has a bug when it doesn't behave as Walter intended when he wrote or accepted the code. Ok, I can't argue if that is the definition. My main concern is that we need to attract more people with a strong comp.sci. background, because as a language grows it becomes trickier to improve, and the most difficult topics are the ones that remain unresolved (like we see with @live, shared and the GC). I agree that there are more important topics than streamlining parametric types. Like shared and memory management. But it is still important to have an idea of which areas are worth picking up; if someone comes along with an interest in writing solvers, then this could be something he/she could tinker with. should work. Walter has not come forward to say that he made a mistake in the implementation. Ok, but that is not important. What is important is that if someone comes along with an interest in this area, then we can encourage them to work on it. Done. Incremental improvements lead to a system that works pretty well a lot of the time. That's Walter's signature, isn't it? That happens in many compiler development cycles. Of course, D has also added a lot of features... perhaps at the expense of bringing what exists to perfection. I don't disagree. But we have to work with what we've got. The implementation exists. The spec doesn't. It probably would be a good idea to focus on one subsystem at a time. Refactor, document, make a prioritized list of improvements for that subsystem, then improve/reimplement and document, then move on to the next subsystem.
If memory management is in the center now, then that is great, but then maybe the next cycle could take another look at the type system as a whole.
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 15:04:34 UTC, welkam wrote: Also how "i'm like you" is an insult? I don't think I should reply to this…
Re: Printing shortest decimal form of floating point number with Mir
On Tuesday, 5 January 2021 at 13:30:50 UTC, Guillaume Piolat wrote: On Tuesday, 5 January 2021 at 09:47:41 UTC, Walter Bright wrote: The only D compiler that uses excess precision is DMD, and only if the -O flag is passed. The same example compiled with GDC uses write-read code. LDC uses SSE code. DMD still supports baseline 32-bit Windows, which does not have XMM registers. It would be nice if no excess precision was ever used. Fun fact: in AIFF files the sampling rate is stored as an 80-bit IEEE 754 floating point number. ;)
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 17:48:50 UTC, ag0aep6g wrote: I think you're hitting the nail on the head here regarding the confusion. Such a rewrite makes intuitive sense, and it would be nice, but it doesn't happen. So, does that mean you agree that better unification would be a worthwhile item to have on a wish list for 2021? So, if somebody wants to do a full implementation that performs well, then it would be an interesting option? Quite frankly, it is much better to just say "oh, this is a deficiency in the implementation" than to say that the language spec is fubar... Also, the whole idea of writing the language spec to match the implementation is not a good approach. I think D could become competitive if the existing feature set is streamlined and polished.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 22:55:28 UTC, Ola Fosheim Grøstad wrote: "BarInt", "Bar!int" and "Foo!int" are all names, or labels, if you wish. And they all refer to the same object: the nominal type. Which you can test easily by using "is(BarInt == Foo!int)". If the terminology is difficult, let's call them "signifiers". If D adds type functions, then another signifier for the same type could be "Combine(Foo, int)". It should not matter which signifier you use: if they all yield the exact same object (in the mathematical sense), the same nominal type "struct _ {}", then they should be interchangeable with no semantic impact. This is a very basic concept in PL design. If you name the same thing several ways (any way you like), then the effect should be the same if you swap one for another. It should be indistinguishable.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 22:14:12 UTC, welkam wrote: Anyway you want to assign a template name. Spoiler alert: Bar!int is not a name. It's also not a type, or even an object. You might have used another term for how alias should work, but I can't track them all. It's a template instantiation. It is a name, e.g.: alias BarInt = Bar!int; "BarInt", "Bar!int" and "Foo!int" are all names, or labels, if you wish. And they all refer to the same object: the nominal type. Which you can test easily by using "is(BarInt == Foo!int)". When I got into personality types and typed myself, I found out that my type doesn't respect the physical world and details. Drop the ad hominem. Argue the case.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 17:58:35 UTC, Ola Fosheim Grøstad wrote: On Monday, 4 January 2021 at 17:24:42 UTC, John Colvin wrote: in your opinion, this should compile and msg `int int`, yes? It does match:

    template Q(A : Foo!int)
    {
        pragma(msg, A.stringof);
    }

So then it should also match Foo!T, yes? Please also note that it is completely acceptable to put limits on the constraints you are allowed to use for matching in order to get good performance, but it should work for the constraints you do allow.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 17:24:42 UTC, John Colvin wrote: in your opinion, this should compile and msg `int int`, yes? It does match:

    template Q(A : Foo!int)
    {
        pragma(msg, A.stringof);
    }

So then it should also match Foo!T, yes?
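A self-contained version of the specialization above (the `Foo` definition and the instantiation line are my additions, for illustration):

```d
struct Foo(T) {}

// The specialization `A : Foo!int` matches when Q is instantiated
// with exactly Foo!int, and the pragma prints the matched type name
// at compile time.
template Q(A : Foo!int)
{
    pragma(msg, A.stringof);
}

// Instantiating Q triggers the compile-time message.
alias X = Q!(Foo!int);
```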
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 15:53:44 UTC, Atila Neves wrote: It wasn't a process-oriented answer, nor do I think it should have been. The PR was a change to the compiler with an accompanying DIP. I'm a fan of giving an opinion early to save everyone a lot of work and bother. All management communication about conclusions has a process-oriented aspect to it. Do you just want to quickly shut the door completely, or do you want to give people a feeling that their ideas will be remembered in the continuing process of improving the product? If you cannot grow that feeling, then the incentive to try will be reduced significantly...
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 15:25:13 UTC, Atila Neves wrote: On Tuesday, 29 December 2020 at 19:59:56 UTC, Ola Fosheim Grøstad wrote: 1. acknowledgment of the issue 2. acknowledgment of what the issue leads to in terms of inconvenience 3. a forward-looking vision for future improvements Your two #1 points aren't the same - understanding/acknowledging the issue. I think I could have done more to acknowledge it now that you've brought it up. In this case, maybe #1 and #2 are the same. But sometimes people will complain about the "inconvenience" and not drill down to the real cause in terms of language mechanics. A valid response could be "I will look and see if I can find the source of this problem, but I totally see the inconvenience you are experiencing. We will look at this more closely when planning for release X.Y.Z, where we do an overhaul of subsystem Q." I don't think a process-oriented response has to be more concrete than that?
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 15:15:50 UTC, ag0aep6g wrote: As far as I understand, describing what DMD does as "unification" would be a stretch. You might have a point saying that DMD should do "plain regular unification" instead of the ad hoc, undocumented hacks it does right now. Unification is what you do with parametric types, even if it is implemented in an ad hoc manner that turns out not to work... The funny thing is that this would have worked with regular macro expansion.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 15:03:05 UTC, jmh530 wrote: On Monday, 4 January 2021 at 14:40:31 UTC, ag0aep6g wrote: On 04.01.21 15:37, Ola Fosheim Grøstad wrote: On Monday, 4 January 2021 at 14:11:28 UTC, ag0aep6g wrote: `Bar!int` is an alias. It's indistinguishable from `Foo!int`. The code fails in the same manner when you replace "Bar!int" with "Foo!int". Wrong. This succeeds:

    struct Foo(T) {}
    alias Bar(T) = Foo!T;
    void f(T)(Foo!T x) {}
    void main() { f(Bar!int()); }

You didn't replace "Bar!int" with "Foo!int". You replaced "Bar!T" with "Foo!T". That's something else entirely. IMO, this is a better example, even if it's a little more verbose.

    struct Foo(T) {}
    alias Bar(T) = Foo!T;
    void f(T)(Bar!T x) {}

    void main()
    {
        auto x = Bar!int();
        f(x);
    }

Also, the type system clearly sees the same type with two names, so there is no new nominal type (obviously):

    struct Foo(T) {}
    alias Bar(T) = Foo!T;
    static assert(is(Bar!int == Foo!int));

We are talking about unification over complete types; unification over incomplete types would be more advanced... but this isn't that. We don't start unification until we have a concrete, complete type to work with.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 14:44:00 UTC, Ola Fosheim Grøstad wrote: On Monday, 4 January 2021 at 14:40:31 UTC, ag0aep6g wrote: You didn't replace "Bar!int" with "Foo!int". You replaced "Bar!T" with "Foo!T". That's something else entirely. No, it isn't. When it is instantiated you get "Bar!int", and then unification would substitute that with "Foo!int". This is basic type system design. Nothing advanced. Just plain regular unification. This shouldn't even be worth discussing... the fact that it is being debated isn't promising for D's future... Also, keep in mind that the type isn't "Foo"; that is also just a name! The true type would be a nominal "struct _ {}". If you, through an alias, say that an object has two equivalent names, then the type system had better behave accordingly.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 14:40:31 UTC, ag0aep6g wrote: You didn't replace "Bar!int" with "Foo!int". You replaced "Bar!T" with "Foo!T". That's something else entirely. No, it isn't. When it is instantiated you get "Bar!int", and then unification would substitute that with "Foo!int". This is basic type system design. Nothing advanced. Just plain regular unification. This shouldn't even be worth discussing... the fact that it is being debated isn't promising for D's future...
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 14:11:28 UTC, ag0aep6g wrote: `Bar!int` is an alias. It's indistinguishable from `Foo!int`. The code fails in the same manner when you replace "Bar!int" with "Foo!int". Wrong. This succeeds:

    struct Foo(T) {}
    alias Bar(T) = Foo!T;
    void f(T)(Foo!T x) {}
    void main() { f(Bar!int()); }
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 13:47:17 UTC, Ola Fosheim Grøstad wrote: An alias is a shorthand. If it is possible to discriminate by the alias and the actual object, then that is a semantic problem. Typo: "discriminate between". An alias should be indistinguishable from the object; you are only naming something. You should be able to use whatever names you fancy without that having semantic implications; that's the core PL design principle. (The stupid example that didn't work out was just me forgetting that I had played around with higher-kinded template parameters in run.dlang.io; I thought it was the code above... forgot. :-)
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 12:35:12 UTC, John Colvin wrote: What's the simplest example that doesn't work, and is that simple example just indirection through an alias, or is it actually indirection through a template that *when instantiated* turns out to be just an alias? Indirection through a parametric alias. This is the simplest I have come up with so far:

    struct Foo(T) {}
    alias Bar(T) = Foo!T;
    void f(T)(Bar!T x) {}
    void main() { f(Bar!int()); }

I created a thread for it: https://forum.dlang.org/post/nxrfrizqdmhzhivxp...@forum.dlang.org I have a suspicion that what you're asking for here is for the type inference to have x-ray vision into uninstantiated templates that works for a few simple cases. Am I wrong? No, just substitute "Bar!int" with "Foo!int". To be clear, a really useful special case can be really useful and worthwhile, but I'm not convinced this is the principled "type system bug" you are saying it is. Why are you not convinced? An alias is a shorthand. If it is possible to discriminate between the alias and the actual object, then that is a semantic problem.
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 05:58:09 UTC, Walter Bright wrote: On 1/3/2021 8:37 PM, 9il wrote: I didn't believe it when I got a similar answer about IEEE floating-point numbers: D doesn't purport to be an IEEE 754 compatible language, and the extended precision bug is declared to be a language feature. The "extended precision bug" is how all x87 code works, from C to C++ to Java. The reason is simple: removing the problem requires all intermediate results to be written to memory and read back in, which is a terrible performance problem. Early Java implementations did this write/read, and were forced to change it. The advent of the XMM registers resolved this issue, and all the x86 D compilers now use XMM for 32- and 64-bit floating point math when compiled for a CPU that has XMM registers. Extended precision only happens when the `real` 80-bit type is used, and that is IEEE conformant. But you still have to deal with things like ARM, so maybe the better option is to figure out what the differences are between various hardware and define "floating point conformance levels" that a library can test for, including which SIMD instructions are available. For instance, the accuracy of functions like log/exp/sin/cos/arcsin/… can vary between implementations. It would be useful for libraries to know.
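As a sketch of what such conformance probing could build on: D already exposes some floating-point properties and target capabilities at compile time. The type properties and the `D_SIMD` predefined version below are standard D; the "conformance level" idea itself is the speculative part.

```d
// Probe floating-point capabilities at compile time.
static if (real.mant_dig == 64)
{
    // `real` is x87 80-bit extended precision on this target.
}
else static if (real.mant_dig == 53)
{
    // `real` is just IEEE double (common on non-x86 targets such as ARM).
}

version (D_SIMD)
{
    // Compiler-supported SIMD vector types (core.simd) are available.
}

// Mantissa widths of the standard floating-point types:
pragma(msg, "float:  ", float.mant_dig);  // 24
pragma(msg, "double: ", double.mant_dig); // 53
```

A fuller "conformance level" scheme would presumably also have to cover libm accuracy (log/exp/sin/…), which these compile-time properties say nothing about.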