Re: Why does nobody seem to think that `null` is a serious problem in D?
On Thursday, 22 November 2018 at 15:50:01 UTC, Stefan Koch wrote: I'd say the problem here is not just false positives, but false negatives! With emphasis on _incremental_ additions to the compiler that cover more and more positives without introducing any false negatives whatsoever, and without losing compilation performance. I recall Walter saying this is challenging to get right but a very interesting task. This would make D even more competitive against languages such as Rust.
Re: Making external types available to mixins
On Saturday, 17 November 2018 at 17:58:54 UTC, John Chapman wrote: The following code doesn't compile because the generated type name needs to be available inside the mixin's scope, whereas it's actually in another module.

```
auto makeWith(string className, Args...)(auto ref Args args)
{
    mixin("return makeWith!(I", className, "Factory)(args);"); // Forwarded to the implementation of makeWith below
}

auto makeWith(T, Args...)(auto ref Args args) ... // This is the implementation
```

The idea is that users could type (for example) makeWith!`Calendar`(...) instead of the longer makeWith!ICalendarFactory(...). I tried mixing in an import statement with the help of std.traits.moduleName so that the I...Factory type would be available, but again the compiler complains that it's undefined. Has anyone had a similar need and come up with a solution?

You need to qualify the names: I use https://github.com/libmir/dcompute/blob/master/source/dcompute/driver/ocl/util.d#L77-L97 for this in dcompute (it needs to be updated for the new style of mixin, but you get the idea).
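A minimal single-file sketch of what "qualify the names" can look like. The type names (`ICalendar`, `ICalendarFactory`) are hypothetical stand-ins for the third-party API, and everything lives in one module here purely so the example is self-contained; the point is that the mixin string prefixes the generated type name with the module the factory types live in:

```d
module app; // named module so the mixin below can qualify the symbol

import std.traits : moduleName;

// Hypothetical stand-ins for the third-party object/factory pairs;
// in the original problem the factory lives in another module.
interface ICalendar {}
class ICalendarFactory {}

// Implementation overload: takes the actual factory type.
auto makeWith(T, Args...)(auto ref Args args)
{
    return new T(args);
}

// Convenience overload: builds the type name and qualifies it with the
// module the factory types live in, so the mixin resolves the name
// rather than looking it up unqualified in the local scope.
auto makeWith(string className, Args...)(auto ref Args args)
{
    enum qualified = moduleName!ICalendar ~ ".I" ~ className ~ "Factory";
    mixin("return makeWith!(" ~ qualified ~ ")(args);");
}

void main()
{
    auto factory = makeWith!"Calendar"();
    assert(factory !is null);
}
```

For a factory type in a genuinely different module, `moduleName` would be applied to a symbol from that module (which must be imported where the string overload is declared).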
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Wednesday, November 21, 2018 3:24:06 PM MST Johan Engelen via Digitalmars-d-learn wrote:
> On Wednesday, 21 November 2018 at 07:47:14 UTC, Jonathan M Davis wrote:
> > IMHO, requiring something in the spec like "it must segfault when dereferencing null" as has been suggested before is probably not a good idea and is really getting too specific (especially considering that some folks have argued that not all architectures segfault like x86 does), but ultimately, the question needs to be discussed with Walter. I did briefly discuss it with him at this last dconf, but I don't recall exactly what he had to say about the ldc optimization stuff. I _think_ that he was hoping that there was a way to tell the optimizer to just not do that kind of optimization, but I don't remember for sure.
>
> The issue is not specific to LDC at all. DMD also does optimizations that assume that dereferencing [*] null is UB. The example I gave is dead-code-elimination of a dead read of a member variable inside a class method, which can only be done either if the spec says that `a.foo()` is UB when `a` is null, or if `this.a` is UB when `this` is null.
>
> [*] I notice you also use "dereference" for an execution machine [**] reading from a memory address, instead of the language doing a dereference (which may not necessarily mean a read from memory).
> [**] intentional weird name for the CPU? Yes. We also have D code running as webassembly...

Skipping a dereference of null shouldn't be a problem as far as memory safety goes. The issue is if the compiler decides that UB allows it to do absolutely anything and rearranges the code in such a way that invalid memory is accessed. That cannot be allowed in @safe code in any D compiler. The code doesn't need to actually segfault, but it absolutely cannot access invalid memory even when optimized. Whether dmd's dead-code elimination algorithm is able to make @safe code unsafe, I don't know.
I'm not familiar with dmd's internals, and in general, while I have a basic understanding of the various levels of a compiler, once the discussion gets to machine instructions and how the optimizer works, my understanding definitely isn't deep. After we discussed this issue with regards to ldc at dconf, I brought it up with Walter, and he didn't seem to think that dmd had such a problem, but I didn't think to raise that particular possibility either. It wouldn't surprise me if dmd also had issues in its optimizer that made @safe not @safe, and it wouldn't surprise me if it didn't. It's the sort of area where I'd expect ldc's more aggressive optimizations to be much more likely to run into trouble, and ldc is more likely to do things that Walter isn't familiar with, but that doesn't mean that Walter didn't miss anything with dmd either.

After all, he does seem to like the idea of allowing the optimizer to assume that assertions are true, and as far as I can tell from discussions on that topic, he doesn't seem to have understood (or maybe just didn't agree) that if we did that, the optimizer couldn't be allowed to make that assumption wherever there is any possibility of the code not being memory safe if the assumption is wrong (at least not without violating the guarantees that @safe is supposed to provide). If the assumption turns out to be wrong (which is quite possible, even if it's not likely in well-tested code), then @safe would violate memory safety. As I understand it, by definition, @safe code is supposed to not have undefined behavior in it, and certainly, if any compiler's optimizer takes undefined behavior as meaning that it can do whatever it wants at that point with no restrictions (which is what I gathered from our discussion at dconf), then I don't see how any D compiler's optimizer can be allowed to treat anything as UB in @safe code.
That may be why Walter was updating various parts of the spec a while back to talk about compiler-defined as opposed to undefined, since there are certainly areas where the compiler can have leeway with what it does, but there are places (at least in @safe code), where there must be restrictions on what it can assume and do even when the implementation is given leeway, or @safe's memory safety guarantees won't actually be properly guaranteed. In any case, clearly this needs to be sorted out with Walter, and the D spec needs to be updated in whatever manner best fixes the problem. Null pointers / references need to be guaranteed to be @safe in @safe code. Whether that's going to require that the compiler insert additional null checks in at least some places, I don't know. I simply don't know enough about how things work with stuff like the optimizers, but it wouldn't surprise me if in at least some cases, the compiler is ultimately going to be forced to insert null checks. Certainly, at minimum, I think that it's quite clear that if a platform doesn't
Re: Making external types available to mixins
On Sunday, 18 November 2018 at 11:29:51 UTC, John Chapman wrote: On Saturday, 17 November 2018 at 21:11:38 UTC, Adam D. Ruppe wrote: On Saturday, 17 November 2018 at 17:58:54 UTC, John Chapman wrote: Has anyone had a similar need and come up with a solution? You might be able to just pass it the Calendar type, and then fetch its parent module and get the ICalendarFactory from there (assuming they are defined in the same module). But generally speaking, passing strings to a mixin that refer to something in another module isn't going to work well thanks to scoping rules. You are better off passing a symbol of some sort. So there is no actual Calendar type. There's an ICalendarFactory type that creates instances of ICalendar (these types are part of a third-party API). "Calendar" is just a key users could use when calling a "makeWith" method that would build the ICalendar/Factory names, instantiate the factory, call the appropriate factory method and return the result. There are thousands of such object/factory pairs in the API. Just trying to cut out a lot of boilerplate code, but it doesn't seem doable this way. Cheers, So I had a go at this and I have a working solution. https://run.dlang.io/is/oaH6Ib At first, I tried to do everything in the mixin, as you can see with the `failedAttempt` function. The idea was that this should have worked like `mixin(failedAttempt!"Calendar"(1, 2, 3));`. As you can see, and the name suggests, I wasn't able to make it work with `args`. The solution I have to your problem is to use a template, in this case the `theType` template that will expand to the fully qualified name. So you'd use it like `makeWith!(theType!"Calendar")(args);` Hope it helps! Edi
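The linked run.dlang.io snippet isn't reproduced in this thread, but the described `theType` approach can be sketched as follows. The API types are hypothetical stand-ins (the real ones come from the third-party API); the key idea is that the alias template is declared in a scope where the factory types are visible, so the mixin inside it resolves the generated name even when the template is instantiated from elsewhere:

```d
// Hypothetical stand-ins for the third-party API's object/factory pairs.
interface ICalendar {}
class CalendarImpl : ICalendar {}
class ICalendarFactory { ICalendar create() { return new CalendarImpl; } }

// Expands a short key like "Calendar" into the factory type itself.
// Because the mixin runs here, where the I...Factory types are visible,
// the caller's scope doesn't matter.
template theType(string name)
{
    mixin("alias theType = I" ~ name ~ "Factory;");
}

// Instantiates the factory and asks it for the object.
auto makeWith(T, Args...)(auto ref Args args)
{
    auto factory = new T;
    return factory.create();
}

void main()
{
    auto cal = makeWith!(theType!"Calendar")();
    assert(cal !is null);
}
```

In a real project, `theType` would live in (or import) the module that declares the thousands of factory types, cutting the boilerplate to one template.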
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Thu, 22 Nov 2018 15:50:01 +, Stefan Koch wrote: > I'd say the problem here is not just false positives, but false > negatives! False negatives are a small problem. The compiler fails to catch some errors some of the time, and that's not surprising. False positives are highly vexing because it means the compiler rejects valid code, and that sometimes requires ugly circumlocutions to make it work.
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Thursday, 22 November 2018 at 15:38:18 UTC, Per Nordlöw wrote: The natural way forward for D is to add static analysis in the compiler that tracks use of possibly uninitialized classes (and perhaps also pointers). This has been discussed many times on the forums. The important thing with such an extra warning is to incrementally add it without triggering any false positives. Otherwise programmers aren't gonna use it. I'd say the problem here is not just false positives, but false negatives!
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Monday, 19 November 2018 at 21:23:31 UTC, Jordi Gutiérrez Hermoso wrote: When I was first playing with D, I managed to create a segfault by doing `SomeClass c;` and then trying to do something with the object I thought I had default-created, by analogy with C++ syntax. Seasoned D programmers will recognise that I did nothing of the sort; instead `c` is null, and my program ended up dereferencing a null pointer. I'm not the only one who has done this. I can't find it right now, but I've seen at least one person open a bug report because they misunderstood this as a bug in dmd. I have been told a couple of times that this isn't something that needs to be patched in the language, but I don't understand. It seems like a very easy way to generate a segfault (and not a NullPointerException or whatever). What's the reasoning for allowing this? The natural way forward for D is to add static analysis in the compiler that tracks use of possibly uninitialized classes (and perhaps also pointers). This has been discussed many times on the forums. The important thing with such an extra warning is to incrementally add it without triggering any false positives. Otherwise programmers aren't gonna use it.
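The pitfall described above takes only a few lines to reproduce. Unlike C++, a class declaration in D gives you a null reference, not a default-constructed object:

```d
class SomeClass
{
    int x;
}

void main()
{
    SomeClass c;       // unlike C++, no object is constructed here
    assert(c is null); // c is just a null reference
    // c.x = 1;        // uncommenting this dereferences null (segfault
    //                 // on typical platforms)
    c = new SomeClass; // explicit construction is required
    c.x = 1;
    assert(c.x == 1);
}
```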
Re: Why does nobody seem to think that `null` is a serious problem in D?
On 11/20/18 6:14 PM, Johan Engelen wrote: On Tuesday, 20 November 2018 at 19:11:46 UTC, Steven Schveighoffer wrote: On 11/20/18 1:04 PM, Johan Engelen wrote: D does not make dereferencing on class objects explicit, which makes it harder to see where the dereference is happening.

Again, the terms are confusing. You just said the dereference happens at a.foo(), right? I would consider the dereference to happen when the object's data is used, i.e. when you read or write what the pointer points at.

But `a.foo()` is already using the object's data: it is accessing a function of the object and calling it. Whether it is a virtual function or a final function shouldn't matter. There are different ways of implementing class function calls, but here people often seem to pin things down to one specific way. I feel I stand alone in the D community in treating the language in this abstract sense (like C and C++ do; other languages I don't know). It's similar to how people think that local variables and the function return address are put on a stack, even though that is just an implementation detail that is free to change (and often does: local variables are regularly _not_ stored on the stack [*]). Optimization isn't allowed to change the behavior of a program, yet even simple dead-code elimination would do so when a null dereference is not treated as UB and is not guarded by a null check. Here is an example of code that also does what you call a "dereference" (a read of an object data member):

```
class A {
    int i;
    final void foo() {
        int a = i; // no crash with -O
    }
}

void main() {
    A a;
    a.foo(); // dereference happens
}
```

I get what you are saying. But in terms of memory safety *both results* are safe. The one where the code is eliminated is safe, and the one where the segfault happens is safe. This is a tricky area, because D depends on a hardware feature for language correctness.
In other words, it's perfectly possible for a null read or write not to result in a segfault, which would make D's allowance of dereferencing a null object without checking for null actually unsafe (now it's just another dangling pointer).

In terms of language semantics, I don't know what the right answer is. If we want to say that if an optimizer changes program behavior, the code must be UB, then this would have to be UB. But I would prefer saying something like: if a segfault occurs and the program continues, the system is in UB-land, but otherwise, it's fine. If this means an optimized program runs and a non-optimized one crashes, then that's what it means. I'd be OK with that result. It's like Schrödinger's segfault! I don't know what it means in terms of compiler assumptions, so that's where my ignorance will likely get me in trouble :)

These discussions are hard to have on a mailing list, so I'll stop here. Until next time at DConf, I suppose... ;-)

Maybe that is a good time to discuss it and learn how things work. But clearly people would like to at least have a say here. I still feel that using the hardware to deal with null access is OK, and a hard crash is the best result for something that would clearly be UB otherwise. -Steve
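The point that a null access need not land anywhere near address zero can be made concrete with a class whose field sits at a large offset from `this`. This is an illustrative sketch, not guaranteed behavior: whether an access through a null reference actually faults depends entirely on the platform's memory mapping, which is exactly the hazard being discussed.

```d
class Huge
{
    ubyte[4096 * 1024] pad; // pushes the next field ~4 MB past `this`
    int x;

    // With a null `this`, this write would target an address around the
    // 4 MB mark, which may or may not be an unmapped guard page, so a
    // segfault is not guaranteed; a silent wild write is possible.
    final void poke() { x = 1; }
}

void main()
{
    // The field offset alone shows why a "null dereference" is not
    // always an access near address zero.
    assert(Huge.x.offsetof > 4096 * 1024);
}
```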
Re: D vs perl6
On Thursday, 22 November 2018 at 09:03:19 UTC, Gary Willoughby wrote: On Monday, 19 November 2018 at 06:46:55 UTC, dangbinghoo wrote: So, can you experts give a more comprehensive comparison of perl6 and D? Sure! 1). You can actually read and understand D code. ^_^, yeah, you're right. D code is much simpler, but it's said that perl6 is a new design with great modeling power. What I think is this: both D and perl6 have had very long compiler development times, and both have few users. After reading the official perl6 documentation, I found that D has most of what perl6 claims as its modern, powerful features, and D also compiles to native code, so we can write simple, clean code that runs fast.
Re: Wrapping a Dub project inside Meson
On Thursday, 22 November 2018 at 02:46:03 UTC, Adnan wrote: Does anyone have experience with using meson to wrap around a dub project? I have a typical dub project, meaning I have a dub dependency, but I want to use meson for two reasons: 1. I want to distribute the application in the form of a snap package (https://snapcraft.io/). Snapcraft does not recognize dub as a build tool afaik. 2. dub lacks very basic options like installing an app in the user's PATH (https://github.com/dlang/dub/issues/839). The issue seems to have been stalled for a while, which is quite a shame. What I want to do is basically: when I write `ninja`, it should invoke `dub build`. I would like to have a "dub plugin" for snapcraft (and other packaging systems). Adding "install" to dub sounds like re-inventing the wheel to me. Andrea
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Wednesday, 21 November 2018 at 23:27:25 UTC, Alex wrote: Nice! Didn't know that... But the language is a foreign one for me. Nevertheless, from what I saw: Shouldn't it be var x: C? as an optional kind, because otherwise, I can't assign a nil to the instance, which I can do to a class instance in D... and if it is, it works in the same manner as C#, (tried this out! :) ) This is true. But then the difference is that you can't* call a method on an optional variable without first unwrapping it (which is enforced at compile time as well). * You can force unwrap it, and then you'd get a segfault if there was nothing inside the optional. But most times, if you see someone force unwrapping an optional, it's a code smell in Swift. Comparing non-optional types from Swift with classes in D is... yeah... hmm... evil ;) Hehe, maybe in a way. Was just trying to show that compilers can fix the null reference "problem" at compile time. And that flow analysis can detect initialization. And if you assume a kind which cannot be nil, then you are again with structs here... But I wondered about something different: Even if the compiler would check the existence of an assignment, the runtime information cannot be deduced, if I understand this correctly. And if so, it cannot be checked at compile time whether something is or is not null. Right? Aye. But depending on how a language is designed, this problem - if you think it is one - can be dealt with. It's why Swift has optionals built in to the language.
Re: How do you debug @safe @nogc code? Can't figure out how to print.
On Sunday, 18 November 2018 at 11:00:26 UTC, aliak wrote: On Saturday, 17 November 2018 at 21:56:23 UTC, Neia Neutuladh wrote: On Sat, 17 Nov 2018 21:16:13 +, aliak wrote: [...]

I meant something like:

```
void debugln(T...)(T args) @nogc {
    import std.stdio;
    debug(MyProject) writeln(args);
}
```

You use that function instead of writeln in your @nogc-compatible templates:

```
void callFunc(alias func)() {
    debugln("about to call function!");
    func();
    debugln("done calling function!");
}
```

Then I can write:

```
@nogc:
void foo() { printf("hello world\n"); }
void main() { callFunc!foo(); }
```

Aha! I misunderstood what you meant. Yes, that's actually simpler than what I was doing :D Thanks!

Alternatively, you can use the core.stdc modules, which do not depend on the GC, for debugging. https://dlang.org/phobos/core_stdc_stdio.html#.printf
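A minimal sketch of the `core.stdc.stdio` approach mentioned above: the C stdio functions never touch the GC, so they are callable from `@nogc` code where Phobos' `writeln` is not. The `formatDebug` helper name is made up for this example:

```d
import core.stdc.stdio : printf, snprintf;

// Formats a debug message into a caller-provided buffer, without any GC
// allocation. Returns the number of characters written.
int formatDebug(char[] buf, int x) @nogc nothrow
{
    return snprintf(buf.ptr, buf.length, "debug: x = %d", x);
}

void main() @nogc nothrow
{
    char[64] buf;
    auto len = formatDebug(buf[], 42);
    printf("%s\n", buf.ptr); // prints: debug: x = 42
    assert(buf[0 .. len] == "debug: x = 42");
}
```

Plain `printf` alone also works, of course; going through `snprintf` just keeps the formatting reusable and easy to check.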
Re: D vs perl6
On Thursday, 22 November 2018 at 09:03:19 UTC, Gary Willoughby wrote: On Monday, 19 November 2018 at 06:46:55 UTC, dangbinghoo wrote: So, can you experts give a more comprehensive compare with perl6 and D? Sure! 1). You can actually read and understand D code. Also, D can be parsed. See: Perl Cannot Be Parsed: A Formal Proof ( https://www.perlmonks.org/?node_id=663393 )
Re: D vs perl6
On Thursday, 22 November 2018 at 09:03:19 UTC, Gary Willoughby wrote: On Monday, 19 November 2018 at 06:46:55 UTC, dangbinghoo wrote: So, can you experts give a more comprehensive compare with perl6 and D? Sure! 1). You can actually read and understand D code. Made my day. Thank you :)
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Wednesday, 21 November 2018 at 22:24:06 UTC, Johan Engelen wrote: The issue is not specific to LDC at all. DMD also does optimizations that assume that dereferencing [*] null is UB. Do you have an example? I think it treats null dereference as implementation defined but otherwise safe.
Re: Why does nobody seem to think that `null` is a serious problem in D?
On Wednesday, 21 November 2018 at 17:00:29 UTC, Alex wrote: This was not my point. I wonder whether the case where the compiler can't figure out the initialization state of an object is so hard to construct.

```
import std.experimental.all;

class C {
    size_t dummy;
    final void baz() {
        if (this is null) {
            writeln(42);
        } else {
            writeln(dummy);
        }
    }
}

void main() {
    C c;
    if (uniform01 < 0.5) {
        c = new C();
        c.dummy = unpredictableSeed;
    } else {
        c = null;
    }
    c.baz;
    writeln(c is null);
}
```

C# wouldn't reject the case above, would it?

As `c` is assigned in both branches, the compiler knows it's always in an initialized state after the if statement.
Re: D vs perl6
On Monday, 19 November 2018 at 06:46:55 UTC, dangbinghoo wrote: So, can you experts give a more comprehensive compare with perl6 and D? Sure! 1). You can actually read and understand D code.