Re: My statements related to terminating my SAoC relationship
On Thursday, 18 October 2018 at 00:28:32 UTC, solidstate1991 wrote: I hope it's not some melatonin insensitivity, that would require some pretty harsh drugs. Have you given Cannabis a try? You don't have to smoke it, it can be vaporized too for example, and is really easy to grow yourself.
Re: Using a development branch of druntime+phobos with ldc
On Wednesday, 10 October 2018 at 08:29:52 UTC, Per Nordlöw wrote: but what about rebuilding druntime+phobos with ldc and linking with that specific libphobos.so when compiling my benchmarking app with ldc? Is it possible? If so, what's the preferred way? LDC has its own forks of druntime and Phobos, with numerous required adaptations. So you'd need to apply your patches to those forks & build the libs (druntime and Phobos are separate libs for LDC), e.g., with the included ldc-build-runtime tool, which makes this painless: https://wiki.dlang.org/Building_LDC_runtime_libraries The Wiki page also shows how to link those libs instead of the shipped-with ones.
Re: Move semantics, D vs. C++, ABI details
On Wednesday, 3 October 2018 at 20:57:39 UTC, kinke wrote: For DIP 1014, we (at least LDC) would most likely need to adopt the C++ ABI in this regard, i.e., always pass non-PODs by reference Which would also help with C++ interop of course - while LDC's extern(C++) ABI was fixed wrt. passing all non-PODs by reference, the different destruction rules are still an issue (no destruction when calling a C++ function from D, and double destruction when calling an extern(C++) D function from C++).
Move semantics, D vs. C++, ABI details
This is an attempt to clarify some of the recent confusion wrt. DIP 1014, Walter's statement that DMD wouldn't move structs etc.

My understanding of the terminology:
* D moving: copy bits to another memory location, skipping postblit for the moved-to object & skipping destruction of the moved-from object
* C++ moving: moved-to object constructed via special constructor hijacking the moved-from object's data and resetting the moved-from object for safe destruction (no double-free etc.)

More interesting are the following low-level ABI differences. No guarantees for completeness/absolute correctness from my side (but I've worked on LDC's ABI implementations).

C++11:
1) Non-POD by-value arguments are passed by reference, low-level-wise, and never on the stack or in registers.
2) The argument/parameter is allocated on the caller's stack and destructed by the caller after the call.

```
struct S {
    S(int) {}      // ctor
    S(const S&) {} // copy ctor
    S(S&&) {}      // move ctor
    ~S() {}        // dtor
};

void foo(S);

void bar() {
    // 1) passing an rvalue
    foo(S(123));
    /* =>
     * S tmp(123); // construct temporary
     * foo(&tmp);  // pass the temporary directly by ref, no move ctor involved
     * tmp.~S();   // destruct it after the call
     */

    // 2) passing an lvalue
    S lval(456);
    foo(lval);
    /* =>
     * S tmp(lval); // construct temporary via copy ctor
     * foo(&tmp);   // pass the temporary directly by ref
     * tmp.~S();    // destruct it after the call
     */

    // 3) using std::move
    foo(std::move(lval));
    /* =>
     * S tmp(std::move(lval)); // construct temporary via move ctor (possibly mutating lval)
     * foo(&tmp);              // pass the temporary by ref
     * tmp.~S();               // destruct it after the call
     */
}
```

D:
1) Non-POD by-value arguments are passed by value, i.e., on the stack [or in registers].
2) The callee destructs the parameter; the caller doesn't perform any cleanup/destruction after the call.
```
struct S {
    this(int) {}  // ctor
    this(this) {} // postblit
    ~this() {}    // dtor
}

void foo(S);

void bar() {
    // 1) passing an rvalue
    foo(S(123));
    /* =>
     * S tmp = S(123); // construct temporary
     * foo(tmp);       // pass the temporary by value, i.e., move
     *                 // to foo's params stack (copy bits, no postblit call)
     * // foo() will destruct its param (moved-to object)
     * // destruction of tmp is skipped (no need to reset to S.init)
     */

    // 2) passing an lvalue
    S lval = S(456);
    foo(lval);
    /* =>
     * S tmp = lval; // construct temporary by bitcopy + postblit call
     * foo(tmp);     // pass the temporary by value, see rvalue case
     * // foo() will destruct its param & tmp's dtor is disabled (in the AST)
     */
}
```

D's rvalue case (explicit temporary + move to the params stack) is how LDC handles it to satisfy LLVM IR (before optimization); other compilers might emplace the argument directly in the parameters stack.

For DIP 1014, we (at least LDC) would most likely need to adopt the C++ ABI in this regard, i.e., always pass non-PODs by reference - AFAIK, LLVM IR doesn't provide the means to get the final address of the (moved-to) parameter in the callee's parameter stack, inside the caller's scope (required for the proposed postmove call), with whatever that entails (don't disable the dtor for lvalue copies in the AST, destruct the temporaries after the call in a finally block if the callee potentially throws, etc.).
Re: Random thought: Alternative stuct
On Tuesday, 4 September 2018 at 04:03:19 UTC, Mike Franklin wrote: In my opinion, we shouldn't add a third option. Agreed. Rather, we should deprecate classes, and make and expand the capabilities of structs. Languages like Zig and Rust have done away with classes and all runtime overhead that accompanies them, and are showing great promise by expanding on structs with much more composable features. I'm not familiar with the mentioned languages, but I definitely wouldn't want to miss classes for good old polymorphism. I'm not too fond of the implicit monitor field, but otherwise I find the struct/class distinction in D more or less perfect.
Re: Using a C++ class in a D associative array
On Wednesday, 22 August 2018 at 19:25:40 UTC, Jacob Carlborg wrote: This could be solved, I think, with having "TypeInfo.getHash" a template taking the actual type and not void*. That template can then inspect if the passed type is a D class or any other type of class and act accordingly. It could be simpler (and slower ;)) by using `m_flags & ClassFlags.isCPPclass`.
Re: Using a C++ class in a D associative array
On Monday, 20 August 2018 at 22:16:09 UTC, Jacob Carlborg wrote: At the third line there's a call from object.TypeInfo_Class.getHash. I looked up to see what the "getHash" method is doing in druntime [2], the method looks like this:

```
override size_t getHash(scope const void* p) @trusted const
{
    auto o = *cast(Object*) p;
    return o ? o.toHash() : 0;
}
```

I guess the compiler uses the AA key type's TypeInfo, which is available for extern(C++) classes too. TypeInfo_Class.getHash() then uses the dynamic type via virtual call, (wrongly) assuming it's always a D class. For an extern(C++) class, it will either call some other virtual function (there are no inherited virtual functions from Object), which is what you were seeing, or attempt to call... something. ;)

All this just compiled without any error or warnings. No runtime exceptions or asserts were triggered. I just got a really weird behavior.

This is somewhat special due to the common TypeInfo.getHash() signature for all kinds of types, which just takes a hairy void* pointer to the byte/real/static array/AA/object/… to be hashed. Polishing C++ interop with extern(C++) classes (matching ctors/dtors, mixed class hierarchies, the ability to easily allocate/construct/destruct/free on the other language's side etc.) started with v2.081 and is still ongoing; there are probably more places in druntime silently assuming a D class.
Re: skinny delegates
On Saturday, 4 August 2018 at 13:52:54 UTC, Steven Schveighoffer wrote: No, it depends on and is dictated by D's delegate system. The delegate receives the context pointer by value. Absolutely right, thx for clarifying.
Re: skinny delegates
Argh, should read `*cast(int*) &context`.
Re: skinny delegates
On Saturday, 4 August 2018 at 12:21:18 UTC, Steven Schveighoffer wrote: You don't even need to make a copy to show problems, the context isn't passed by reference: const r1 = dg(); const r2 = dg(); assert(r1 == 43 && r2 == 44); // would fail with optimization. -Steve This depends on the implementation; assuming that captured `x` represents the `*cast(int*) context` lvalue, this example would work.
Re: skinny delegates
A slightly more complex example, illustrating that it wouldn't be enough to check that the delegate body itself doesn't mutate the captured variable:

```
int delegate() increment;

auto foo(int x) {
    increment = () => ++x;
    return () => x;
}

void main() {
    auto dg = foo(42);
    auto dg_copy = dg;
    assert(dg() == 42);
    assert(increment() == 43);
    assert(dg() == 43);
    assert(dg_copy() == 43);
}
```

In the end, I think it really boils down to the fact that the optimized state would be per-delegate (and tied to its lifetime) instead of shared (as we see above, even across lambdas) and GC-managed (and so can happily escape, see one of my earlier posts). So any use of it as an lvalue (taking the address, assigning, passing by ref etc.) isn't allowed in the delegate body itself, and to make sure no other lambda mutates it, it needs to be const.

My earlier "But there are also GC-using delegates which could be optimized this way" should read: lambdas in a non-@nogc parent function are optimization candidates too, and the lambda bodies can use the GC as well.
Re: skinny delegates
On Friday, 3 August 2018 at 16:46:53 UTC, Jonathan Marler wrote: Maybe you could provide an example or 2 to demonstrate why these would be requirements...we may have 2 different ideas on how this would be implemented.

```
auto foo(/*mutable*/ int x) {
    return { return ++x; };
}

void main() {
    auto dg = foo(42);
    auto dg_copy = dg;
    // with the optimization, dg_copy would have its own context
    // in the ptr field, based on the current state in dg (42)
    const r1 = dg();
    const r2 = dg_copy(); // would be 43 with optimization
    assert(r1 == 43 && r2 == 44);
}
```

do you think it should always be on and the developer shouldn't need to or care to opt out of it? Yes, by enforcing it in the language. No knowledge about this optimization necessary, no extra syntax, no extra dependency. Also, what about the developers that want to guarantee that the optimization is occurring? If they do know about this optimization, they probably aren't noobs and IMO should be able to have a look at the LLVM IR or assembly to check whether it is optimized. The only reason for wanting to enforce it that comes to mind ad hoc is GC-free code (-betterC, bare metal), where @nogc should do. But there are also GC-using delegates which could be optimized this way.
Re: skinny delegates
On Friday, 3 August 2018 at 14:46:59 UTC, Jonathan Marler wrote: After thinking about it more I suppose it wouldn't be that complicated to implement. For delegate literals, you already need to gather a list of all the data you need to put on the heap, and if it can all fit inside a pointer, then you can just put it there instead. Nope, immutability (and no escaping) are additional requirements, as each delegate copy has its own context then, as opposed to a single shared GC closure. In the end, I think that most if not all use cases would be better off using the library solution if they want this optimization. I disagree.
Re: skinny delegates
On Thursday, 2 August 2018 at 21:28:27 UTC, kinke wrote: Leaking may be an issue. Ah, I guess that's why you mentioned the use-as-rvalue requirement.
Re: skinny delegates
Leaking may be an issue. This currently works:

```
static const(int)* global;

auto foo(const int param) {
    return { global = &param; return param + 10; };
}

void main() {
    {
        int arg = 42;
        auto dg = foo(42);
        auto r = dg();
        assert(r == 52);
    }
    assert(*global == 42);
}
```

`global` would be dangling as soon as the delegate `dg` goes out of scope.
Re: skinny delegates
On Thursday, 2 August 2018 at 15:12:10 UTC, Steven Schveighoffer wrote: On 8/2/18 11:00 AM, Kagamin wrote: I suppose it's mostly for mutability, so if it's const, it can be optimized based on type information only:

```
auto foo(in int x) {
    return { return x + 10; };
}
```

I'm not sure what you mean here.

I think he's saying that the check for immutability could simply consist of checking that all captured variables (well, there's not much room for a lot of them ;)) have a const type. It's definitely an interesting idea, and the obvious benefit over a library solution is that you wouldn't have to think about this optimization when writing a delegate; if the captured stuff happens to be const and fits into a pointer, the GC won't be bothered, nice.
Re: Is there any good reason why C++ namespaces are "closed" in D?
On Sunday, 29 July 2018 at 11:03:43 UTC, Jonathan M Davis wrote: I guess that the argument at that point is that you would have to put them in separate D modules, just like you would if they were extern(D) functions.

Yep, that'd sound acceptable to me, implying that

```
extern(C++, cppns) { void foo(); }
void foo();
```

wouldn't work anymore, and particularly, neither would this:

```
extern(C++, cppns)
{
    extern(C++, nested) { void foo(); }
    void foo();
}
```

so that a straight C++ namespace => D module hierarchy mapping would probably be required in the general case:

```
// cppns/package.d
module cppns;
extern(C++, cppns) { void foo(); }

// cppns/nested/package.d
module cppns.nested;
extern(C++, cppns) extern(C++, nested) { void foo(); }
```
Re: Is there any good reason why C++ namespaces are "closed" in D?
This limitation really seems to make no sense, especially since you can split up a C++ namespace across multiple D modules, just not inside a single module.
Re: "catch" not catching the correct exception
On Thursday, 26 July 2018 at 07:38:08 UTC, Shachar Shemesh wrote: Mecca doesn't call that. Should it? Can that be the problem? Very likely so. It's (normally) used in core.thread.Thread's push/popContext() when switching into a fiber.
Re: Struct Initialization syntax
On Monday, 23 July 2018 at 17:32:23 UTC, aliak wrote: Can we just consider that named struct init syntax *is* a generated named constructor? If named arguments choose a different syntax then you have no conflict. If they go with the same (i.e. option 2) then you have seamless consistency. +1. And hoping for the latter, seamless consistency.
Re: DIP 1016--ref T accepts r-values--Community Review Round 1
Thanks a lot, Manu, I'm a huge fan of this. Wrt. binding rvalues to mutable refs, we could introduce something like `-transition=rval_to_mutable_ref` to have the compiler list all matching call sites. Wrt. `auto ref`, I'd very much like to see its semantics change to 'pass this argument in the most efficient way' (depending on type and ABI, not just lvalue-ness of argument) at some point in the future.
Re: DMD, Vibe.d, and Dub
On Tuesday, 17 July 2018 at 19:39:32 UTC, Russel Winder wrote: It seems that the LDC 1.11 branch in the GitHub repository has the DMD 2.081.0 problem. If you're referring to branch merge-2.081, that one doesn't exist anymore. master/beta2 are based on 2.081.1+ and should thus be fixed.
Re: DMD, Vibe.d, and Dub
On Tuesday, 17 July 2018 at 18:31:18 UTC, Russel Winder wrote: This would seem to imply that you can't use Vibe.d 0.8.4 with DMD 2.081.0. I think that regression was the main reason for early 2.081.1.
Re: Completely Remove C Runtime with DMD for win32
On Sunday, 15 July 2018 at 20:29:29 UTC, tcb wrote: Is it possible to completely remove the C runtime on windows, and if so how?

This works for me:

```
extern(C) int mainCRTStartup() { return 0; }
```

Compiled with `dmd -m32mscoff -betterC -L/subsystem:CONSOLE main.d`, this yields a 1.5 kB .exe.
Re: A Case for Oxidation: A potential missed opportunity for D
On Friday, 29 June 2018 at 11:47:51 UTC, Jonathan M Davis wrote: However, if you're trying to use D from a C or C++ application, the fact that you have to deal with starting up and shutting down druntime definitely causes problems. Good point, thanks.
Re: A Case for Oxidation: A potential missed opportunity for D
On Friday, 29 June 2018 at 11:31:17 UTC, Radu wrote: There are technical and political reasons here. BetterC offers a clean, no-overhead, strictly enforced subset of the language. This is great for porting over an existing C code base and also for creating equivalent libs in D, without worrying that you carry over baggage from the D runtime. It also serves as a good tier 1 target when porting D to other platforms. WebAssembly is one of those odd platforms where D could shine, and having betterC greatly eases the effort of porting it over (even though so far nobody has stepped up to do this). C is a beast and its hardcore programmers will not touch anything that has typeinfo, GC or classes. Selling betterC to them (this includes teammates) is a lot easier; you can show them the assembly on godbolt.org and they see no extra fat is added. @safe is the added bonus and the final nail in the coffin to ditch C. But ultimately betterC is also a sign of a design failure in both dlang and druntime, in the sense that they weren't conceived to be modular and easy to use in a pay-as-you-go fashion. Until the GC and typeinfo are truly optional and reserved only for the top layers of the standard library, betterC is the best we have.

Okay, thanks for the rationale. If it's mainly about Type-/ModuleInfo bloat and the GC, the compiler fed with -betterC could also
* continue to elide the Type-/ModuleInfo emission,
* additionally error out on all implicit druntime calls to the GC,
* still link in druntime and Phobos.
That should make more things work, incl. the above slice copy and (manually allocated) extern(C++) classes.
Re: A Case for Oxidation: A potential missed opportunity for D
On Friday, 29 June 2018 at 11:24:52 UTC, rikki cattermole wrote: It is a language feature yes, and it doesn't define /how/ it gets implemented.

That's beside my actual point though (and I haven't even mentioned missing class support, which is anything but helpful for developing against existing C++ codebases). My question is: what do people expect to gain by not linking in druntime and Phobos? Is there a feeling the binaries are unnecessarily bloated (-> minimal runtime)? Is it making cross-compilation harder (LDC has the ldc-build-runtime tool for that)? Is it the cozy feeling that the GC won't be used at all? ...
Re: A Case for Oxidation: A potential missed opportunity for D
On Friday, 29 June 2018 at 11:04:30 UTC, rikki cattermole wrote: It greatly simplifies development against existing C/C++ codebases.

How so? By telling people you can express C++:

```
void cpy(char *dst, const char *src, size_t size) {
    for (size_t i = 0; i < size; ++i)
        dst[i] = src[i];
}
```

elegantly and safely like this in D:

```
void cpy(void[] dst, void[] src) {
    dst[] = src[];
}
```

unless they are using betterC (undefined reference to '_d_arraycopy')? Just to highlight one lost language feature.
Re: A Case for Oxidation: A potential missed opportunity for D
On Friday, 29 June 2018 at 10:00:09 UTC, Radu wrote: While not necessarily targeting bare metal, I'm very interested in a working version of @safe dlang. I believe that dlang with betterC, @safe, C/C++ inter-op and dip1000 will be huge for replacing C/C++.

I'd love to hear some reasons for -betterC from a competent guy like yourself. I simply don't get what all the fuss is about and what people expect to gain from losing druntime (and the language features depending on it) and non-template-only Phobos. I understand the separate 'minimal runtime' need for bare metal (no Type- and ModuleInfos etc.), but I can't help seeing betterC as, nicely put, worseD. I acknowledge that it seems to attract wide-spread interest, and I'd like to understand why.
Re: Disappointing performance from DMD/Phobos
On Tuesday, 26 June 2018 at 17:38:42 UTC, Manu wrote: I know, but it's still the reference compiler, and it should at least do a reasonable job at the kind of D code that it's *recommended* that users write.

I get your point, but IMO it's all about efficient allocation of the manpower we have. Spending it on improving the inliner/optimizer, or even adding ARM codegen as was recently suggested IIRC, would be very unwise IMO; LDC and GDC are already there (and way beyond ;)), so I consider fixing bugs, tightening the language spec, improving tooling, druntime/Phobos, C++ interop (thx for your recent contributions there!) and so forth *way* more important than improving DMD codegen. Not too long ago (closed backend), DMD as reference compiler wasn't provided by Linux package managers (but LDC and GDC were), so it's not like D automatically implies DMD. I don't recall ever being interested in whether a compiler for some language was the 'reference' one or not.

I'm using unreleased 2.081, which isn't in LDC yet.

We're (unofficially) at 2.081-beta.2 and fully green except for a very minor OSX debuginfo thingy, see https://github.com/ldc-developers/ldc/pull/2752. So expect a 1.11 beta as soon as 2.081 is released.

Also, LDC seems to have more problems with debuginfo than DMD. Once LDC is on 2.081, I might have to flood their bugtracker with debuginfo related issues.

Looking forward to that. ;) - CodeView debuginfo support is pretty new in LLVM (and not backed by Microsoft AFAIK, plus there have been regressions with LLVM 6). With LLVM 7, there's a new debuginfo intrinsic which I hope will allow us to significantly improve DI for LDC.
Re: DIP 1014:Hooking D's struct move semantics--Community Review Round 1
On Thursday, 17 May 2018 at 19:11:27 UTC, Shachar Shemesh wrote: On 17/05/18 18:47, kinke wrote: Since clang is able to compile this struct and do everything with it, and since the existence of the move constructor requires the precise same type of hooking as is needed in this case, I tend to believe that an IR representation of DIP 1014 is possible. I checked, and the reason is that D and C++ use a different ABI wrt. by-value passing of non-POD arguments. C++ indeed passes a reference to a caller-allocated rvalue, not just on Win64; that makes it trivial, as there are no moves across call boundaries. But your proposal may imply changing the D ABI accordingly.
Re: DIP 1014:Hooking D's struct move semantics--Community Review Round 1
On Thursday, 17 May 2018 at 15:23:50 UTC, kinke wrote: See IR for https://run.dlang.io/is/1JIsk7.

I should probably emphasize that the LLVM `byval` attribute is strange at first sight. Pseudo-IR `void foo(S* byval param); ... foo(S* byval arg);` doesn't mean that the IR callee gets the S* pointer from the IR callsite; it means `memcpy(param, arg, S.sizeof)`, with `param` being an *implicit* address in foo's parameters stack (calculated by LLVM and thus exposed to the callee only). That's the difficulty for LDC I mentioned earlier.
Re: DIP 1014:Hooking D's struct move semantics--Community Review Round 1
On Thursday, 17 May 2018 at 12:36:29 UTC, Shachar Shemesh wrote: Again, as far as I know, structs are not copied when passed as arguments. They are allocated on the caller's stack and a reference is passed to the callee. If that's the case, no move (of any kind) is done.

That's the exception to the rule (LDC's `ExplicitByvalRewrite`), and true for structs > 64 bit on Win64 (and some more structs), with something similar for AArch64. No other ABI supported by LDC passes a low-level pointer to a caller-allocated copy for high-level pass-argument-by-value semantics; the argument is normally moved to the function parameter (in the callEE's parameters stack).

```
struct S {
    size_t a, b;
    this(this) {} // no POD anymore
}

void foo(S param);

void bar() {
    // allocate a temporary on the caller's stack and move it to the callee
    foo(S(1, 2));

    S lvalue;
    // copy lvalue to a temporary on the caller's stack (incl. postblit call)
    // and then move that temporary to the callee
    foo(lvalue);

    import std.algorithm.mutation : move;
    // move the move()-rvalue-result to the callee
    foo(move(lvalue));
}
```

'Move to callee' for most ABIs means a bitcopy/blit to the callee's memory parameters stack, for LDC via the LLVM `byval` attribute. See IR for https://run.dlang.io/is/1JIsk7.
Re: DIP 1014:Hooking D's struct move semantics--Community Review Round 1
3. When deciding to move a struct instance, the compiler MUST emit a call to the struct's __move_post_blt after blitting the instance and before releasing the memory containing the old instance. __move_post_blt MUST receive references to both the pre- and post-move instances. This implies that such structs must not be considered PODs, i.e., cannot be passed in registers and must be passed on the stack. It also means that the compiler will have to insert a __move_post_blt call right before the call (as the callee has no idea about the old address), after blitting the arg to the callee params stack; this may be tricky to implement for LDC, as that last blit is implicit in LLVM IR (LLVM byval attribute). As a side note, when passing a postblit-struct lvalue arg by value, the compiler first copies the lvalue to a temporary on the caller's stack, incl. postblit call, and then moves that copy to the callee. So this requires either a postblit+postmove combo on the caller side before the actual call, or a single postblit call for the final address (callee's param).
Re: Should 'in' Imply 'ref' as Well for Value Types?
On Saturday, 5 May 2018 at 15:22:04 UTC, Bolpat wrote: I once proposed that `in` can mean `const scope ref` that also binds rvalues. https://github.com/dlang/DIPs/pull/111#issuecomment-381911140 We could make `in` be something similar to `inline`. The compiler can implement it as stated above (assign the expression to temporary, reference it), or use copy if copy is cheaper than referencing. I remember, and I still like that proposal a lot, as it'd allow the compiler to tune generic code to the targeted platform and its ABI and free the dev from having to worry about how to pass a read-only input argument in the most efficient way. So if `in` semantics are ever to be redefined, `const [scope ref]` (depending on type and target ABI) are the only ones I'd happily agree with. [And I'd be extremely happy if rvalues could finally bind to ref params, not just as prerequisite for this.]
Re: Issues with debugging GC-related crashes #2
On Thursday, 19 April 2018 at 17:01:48 UTC, Matthias Klumpp wrote: Something that maybe is relevant though: I occasionally get the following SIGABRT crash in the tool on machines which have the SIGSEGV crash:

```
Thread 53 "appstream-gener" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fdfe98d4700 (LWP 7326)]
0x75040428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x75040428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7504202a in __GI_abort () at abort.c:89
#2  0x00780ae0 in core.thread.Fiber.allocStack(ulong, ulong) (this=0x7fde0758a680, guardPageSize=4096, sz=20480) at src/core/thread.d:4606
#3  0x007807fc in _D4core6thread5Fiber6__ctorMFNbDFZvmmZCQBlQBjQBf (this=0x7fde0758a680, guardPageSize=4096, sz=16384, dg=...) at src/core/thread.d:4134
#4  0x006f9b31 in _D3std11concurrency__T9GeneratorTAyaZQp6__ctorMFDFZvZCQCaQBz__TQBpTQBiZQBx (this=0x7fde0758a680, dg=...) at /home/ubuntu/dtc/dmd/generated/linux/debug/64/../../../../../druntime/import/core/thread.d:4126
#5  0x006e9467 in _D5asgen8handlers11iconhandler5Theme21matchingIconFilenamesMFAyaSQCl5utils9ImageSizebZC3std11concurrency__T9GeneratorTQCfZQp (this=0x7fdea2747800, relaxedScalingRules=true, size=..., iname=...) at ../src/asgen/handlers/iconhandler.d:196
#6  0x006ea75a in _D5asgen8handlers11iconhandler11IconHandler21possibleIconFilenamesMFAyaSQCs5utils9ImageSizebZ9__lambda4MFZv (this=0x7fde0752bd00) at ../src/asgen/handlers/iconhandler.d:392
#7  0x0082fdfa in core.thread.Fiber.run() (this=0x7fde07528580) at src/core/thread.d:4436
#8  0x0082fd5d in fiber_entryPoint () at src/core/thread.d:3665
#9  0x in ()
```

You probably already figured that the new Fiber seems to be allocating its 16 KB stack, with an additional 4 KB guard page at its bottom, via a 20 KB mmap() call.
The abort seems to be triggered by mprotect() returning -1, i.e., a failure to disallow all access to the guard page; so checking `errno` should help.
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 20:36:03 UTC, Johannes Pfau wrote: Actually this sounds very familiar: https://github.com/D-Programming-GDC/GDC/pull/236

Interesting, but I don't think it applies here. Both start and end addresses are 16-byte aligned, and both cannot be accessed according to the stack trace (`pbot=0x7fcf4d721010 <error: Cannot access memory at address 0x7fcf4d721010>, ptop=0x7fcf4e321010 <error: Cannot access memory at address 0x7fcf4e321010>`).

That's quite interesting too: `memSize = 209153867776`. I don't know what exactly it is, but it's a pretty large number (~194 GB).
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 10:15:49 UTC, Kagamin wrote: There's a number of debugging options for GC, though not sure which ones are enabled in default debug build of druntime Speaking for LDC, none are, they all need to be enabled explicitly. There's a whole bunch of them (https://github.com/dlang/druntime/blob/master/src/gc/impl/conservative/gc.d#L20-L31), so enabling most of them would surely help in tracking this down, but it's most likely still going to be very tedious. I'm not really surprised that there are compilation errors when enabling the debug options, that's a likely fate of untested code unfortunately. If possible, I'd give static linking a try.
Re: rvalues -> ref (yup... again!)
On Tuesday, 27 March 2018 at 23:59:09 UTC, Rubn wrote: Just adding a few writeln it isn't able to remove the function entirely anymore and can't optimize it out.

Well, writeln() here involves number -> string formatting, GC, I/O, template bloat... There are indeed superfluous memcpy's in your foo() there (although the forward and bar calls are still inlined), which after a quick glance seem to be LLVM optimizer shortcomings; the IR emitted by LDC looks fine. For an arbitrary external function, it's all fine as it should be, boiling down to a single memcpy in foo() and a direct memset in main(): https://run.dlang.io/is/O1aeLK
Re: rvalues -> ref (yup... again!)
On Tuesday, 27 March 2018 at 23:35:44 UTC, kinke wrote: On Tuesday, 27 March 2018 at 21:52:25 UTC, Rubn wrote: It happens with LDC too, not sure how it would be able to know to do any kind of optimization like that unless it was able to inline every single function called into one function and be able to optimize it from there. I don't imagine that'll be likely though.

It does it in your code sample with `-O`; there's no call to bar, and the foo() by-value arg is memcpy'd to the global.

For reference: https://run.dlang.io/is/2vDEXP

Note that main() boils down to a `memset(&gfoo, 10, 1024); return 0;`:

```
_Dmain:
	.cfi_startproc
	pushq	%rax
.Lcfi0:
	.cfi_def_cfa_offset 16
	data16 leaq	onlineapp.Foo onlineapp.gfoo@TLSGD(%rip), %rdi
	data16 data16 rex64 callq	__tls_get_addr@PLT
	movl	$10, %esi
	movl	$1024, %edx
	movq	%rax, %rdi
	callq	memset@PLT
	xorl	%eax, %eax
	popq	%rcx
	retq
```
Re: rvalues -> ref (yup... again!)
On Tuesday, 27 March 2018 at 21:52:25 UTC, Rubn wrote: It happens with LDC too, not sure how it would be able to know to do any kind of optimization like that unless it was able to inline every single function called into one function and be able to optimize it from there. I don't imagine that'll be likely though.

It does it in your code sample with `-O`; there's no call to bar, and the foo() by-value arg is memcpy'd to the global. If you compile everything with LTO - your code and all 3rd-party libs as well as druntime/Phobos - LLVM is able to optimize the whole program as if it were inside a single gigantic 'object' file in LLVM bitcode IR, and is thus indeed theoretically able to inline *all* functions.
Re: rvalues -> ref (yup... again!)
On Saturday, 24 March 2018 at 15:36:14 UTC, Timon Gehr wrote: On 24.03.2018 15:56, kinke wrote: I agree, but restricting it to const ref would be enough for almost all of my use cases. The MS C++ compiler just emits a warning when binding an rvalue to a mutable ref ('nonstandard extension used'), I'd find that absolutely viable for D too. ... A warning is not viable. (There's no good way to fix it.) As long as specific warnings cannot be suppressed via pragmas, one would need to predeclare the lvalue to get rid of it; fine IMHO for the, as I expect, very rare use cases. There is no difference between escaping refs to an rvalue and escaping refs to a short-lived lvalue, as the callee has no idea where the address is coming from anyway. According to Walter, ref parameters are not supposed to be escaped, and @safe will enforce it. Alright, the less keyword overhead, the better. :) You can add additional overloads on the D side. (This can even be automated using a string mixin.) Right, I can, but I don't want to add 7 overloads for a C++ function taking 3 params by const ref. Even if autogenerated by some tool or fancy mixins, the code's legibility would suffer a lot. D's syntax is IMO one of its strongest selling points, and that shouldn't degrade when it comes to C(++) interop.
Re: rvalues -> ref (yup... again!)
On Saturday, 24 March 2018 at 13:49:13 UTC, Timon Gehr wrote: What I'm saying is that I don't really buy Jonathan's argument. Basically, you should just pass the correct arguments to functions, as you always need to do. If you cannot use the result of some mutation that you need to use, you will probably notice. I agree, but restricting it to const ref would be enough for almost all of my use cases. The MS C++ compiler just emits a warning when binding an rvalue to a mutable ref ('nonstandard extension used'), I'd find that absolutely viable for D too. There are only three sensible ways to fix the problem: 1. Just allow rvalue arguments to bind to ref parameters. (My preferred solution, though it will make the overloading rules slightly more complicated.) I always thought the main concern was potential escaping refs to the rvalue, which would be solvable by allowing rvalues to be bound to `scope ref` params only. That'd be my preferred choice as well. 2. Add some _new_ annotation for ref parameters that signifies that you want the same treatment for them that the implicit 'this' reference gets. (A close second.) *Shudder*. 3. Continue to require code bloat (auto ref) or manual boilerplate (overloads). (I'm not very fond of this option, but it does not require a language change.) While `auto ref` seems to have worked out surprisingly well for code written in D, it doesn't solve the problem when interfacing with (many) external C++ functions taking structs (represented in D by structs as well) by (mostly const) ref. You're forced to declare lvalues for all of these args, uglifying the code substantially.
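For readers unfamiliar with the restriction being discussed, here's a minimal sketch in plain D (names made up) showing why callers are forced to predeclare lvalues:

```d
// Minimal sketch of the rvalue-to-ref restriction (hypothetical names).
struct S { int x; }

void inc(ref S s) { s.x += 1; }

void main()
{
    // inc(S(1)); // error: the rvalue S(1) cannot bind to the ref parameter
    S lval = S(1); // workaround: predeclare an lvalue...
    inc(lval);     // ...and pass that instead
    assert(lval.x == 2);
}
```

With a C++ function taking several params by const ref, this lvalue boilerplate multiplies accordingly, which is the code-uglification complaint above.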
Re: D and C++ undefined reference when namespace
On Thursday, 8 March 2018 at 18:56:04 UTC, Markus wrote: I tested dmd (2.079.0), gdc and ldc2. All got the same result. Which makes me think, that it's not a bug, but a "feature" :) C++ mangling is part of the DMD front-end shared by all 3 compilers, so no surprises there: https://github.com/dlang/dmd/blob/master/src/dmd/cppmangle.d
Re: Quora: Why hasn't D started to replace C++?
On Wednesday, 31 January 2018 at 11:42:14 UTC, Seb wrote: On Wednesday, 31 January 2018 at 10:35:06 UTC, Benny wrote: * three compilers Not sure why that's a bad thing. They all have their ups and downs: - dmd SUPER fast compilation - ldc multiarch + good optimization + cross-compilation - gdc multiarch + good optimization (in many cases better than LLVM) + cross-compilation + GNU Off topic, but I'm not aware of _many_ cases, so please let us know whenever you encounter something that GDC optimizes significantly better than LDC.
Re: Bump the minimal version required to compile DMD to 2.076.1
On Tuesday, 16 January 2018 at 13:09:06 UTC, Daniel Kozak wrote: On Tue, Jan 16, 2018 at 12:51 PM, Joakim via Digitalmars-d < digitalmars-d@puremagic.com> wrote: On Monday, 15 January 2018 at 13:25:26 UTC, Daniel Kozak wrote: So why not to use cross compilation? As I said before, you could do that for the initial port, say cross-compiling a build of ldc master for DragonFly by using ldc master on linux. However, from then on, you'd either be stuck requiring all your DragonFly users to do the same or checking that cross-compiled DragonFly binary into a binary package repository somewhere. I don't think any OS does this, as usually the binary packages are all built from source. And this is exactly what many distributions do, so there is nothing wrong with it. There is no big difference between a C++ compiler or a D compiler, you still need to use some existing binary to build it from source. Where's the proof? ;) - At least for LDC, my impression is that Debian/Fedora/... build ltsmaster (C++-based 2.068) first, then use that one to build the latest version, and optionally let the latest version compile itself for the final release (Fedora).
Re: Arch Linux ldc package can't use asan
On Tuesday, 9 January 2018 at 15:27:56 UTC, Wild wrote: On Tuesday, 9 January 2018 at 15:19:37 UTC, Atila Neves wrote: I don't know who's the current maintainer of the Arch Linux D packages. ldc1.7.0 from the Arch repositories doesn't work with -fsanitize=address right now, it fails to link. I originally filed an ldc bug here: I will look into this. - Dan / The maintainer Thanks. The official package ships with a renamed copy of the LLVM compiler-rt library (matching the LLVM version LDC was built with), libldc_rt.asan-x86_64.a. If a copy is out of the question, a dependency on the package containing that lib (original name: libclang_rt.asan-x86_64.a) and a symlink may do the job. libFuzzer is handled like this as well. See https://github.com/ldc-developers/ldc/blob/v1.7.0/CMakeLists.txt#L742-L795.
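The suggested symlink approach could look something like this (the paths and LLVM version are purely illustrative, not the actual Arch package contents):

```shell
# Illustrative paths - adjust to the installed clang/compiler-rt version and libdir.
# Expose the compiler-rt ASan runtime under the name LDC expects:
ln -s /usr/lib/clang/5.0.1/lib/linux/libclang_rt.asan-x86_64.a \
      /usr/lib/libldc_rt.asan-x86_64.a
```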
Re: lld status
On Thursday, 21 December 2017 at 18:40:54 UTC, Andrei Alexandrescu wrote: I heard ldc already uses its embedded variant for linking programs (on Windows? Posix? 32bit? 64bit?). Currently only for Windows-MSVC targets (both 32 and 64 bits) and only when specifying the `-link-internally` switch. The host platform doesn't matter, i.e., it works for cross-linking from any Posix system too, even on ARM etc. ['Embedded variant' => we're linking in the static LLD libs and so share the common LLVM code in a single executable.] Can we distribute it as an alternative to optlink? Now that it's capable of outputting debuginfo .pdb's too (since v5.0, at least on Windows hosts), it should basically be fine. There's one catch though, and that's the rather big size of the executable (26 MB for the v5.0.1 32-bit executable when linked against the static MS runtime with VS 2017, with enabled LLVM backends for x86[_64], ARM, AArch64 and Nvidia PTX). That's due to LLD being a cross-linker by default, capable of outputting Windows, ELF and Mach-O binaries, and because of included codegen capabilities (for Link-Time Optimization), i.e., stuff that DMD doesn't need. Unfortunately, those features cannot simply be opted out of via CMake.
Re: Link errors when compiling shared lib on windows
On Wednesday, 1 November 2017 at 15:15:27 UTC, Daniel Fitzpatrick wrote: I am following this short tutorial on compiling a shared lib: https://wiki.dlang.org/Call_D_from_Ruby_using_FFI Because it's on Windows I am using these compiler options: -shared -m64 -defaultlib=libphobos2.so i.d However, I am receiving rather a lot of linker errors. Not sure what else to provide the compiler. Try fewer options - in particular, don't override `-defaultlib` with a Linux shared object. You'll obviously have to edit the hardcoded `./i.so` in the example as well.
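The reduced command line could look something like this (the `-of` output name is an assumption, matching Windows DLL conventions rather than the tutorial's `.so`):

```shell
# Windows: build a 64-bit DLL; no Linux -defaultlib override.
dmd -shared -m64 -of=i.dll i.d
```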
Re: Struct alignment
On Sunday, 24 September 2017 at 21:01:06 UTC, Johan Engelen wrote: So... what's correct? :-) 2.075+. ;) See https://github.com/dlang/dmd/pull/6754.
Awful dlang.org server performance
Okay, so I'm (sadly) used to every ~10th forum.dlang.org web request taking something like 10-15 seconds to get a response (while the other ~9 are instantaneous). But over the last couple of days, the Wiki has been hardly usable (an edit last night took > 1 minute for the page to reload), and Travis CI is not infrequently dying because fetching https://[nightlies.]dlang.org/install.sh takes too long. Are these problems known and being worked on?
Re: zig
On Friday, 8 September 2017 at 08:07:25 UTC, Daniel N wrote: I was just reading the LLVM release notes. Looks quite interesting, did anyone try it? http://ziglang.org/ I noticed it too in the release notes, browsed to the page, scrolled down to the first code samples and was immediately put off by the syntax. ;)
Re: Events in D
On Wednesday, 30 August 2017 at 15:35:57 UTC, bitwise wrote: -What if I want an event to lock a shared mutex of the enclosing object, without storing a pointer to that mutex inside the event itself (and every single other event in the object)? -What if I want an event to call a method of the enclosing object when a handler is added (without keeping a pointer to it inside the actual event)? So in essence, you'd like something like this to work, right?

struct Event(alias __parent, Handler)
{
    enum parentHasLock = __traits(compiles, __parent.lock());
    ...
    void opCall()(Parameters!Handler args)
    {
        static if (parentHasLock)
            __parent.lock();
        ...
    }
}

struct Host1
{
    Event!Handler onChanged;
    Event!Handler onClosed;
}

and have the compiler internally instantiate something like

Event!(/* parent type */ Host1, /* .offsetof in parent in order to deduce the __parent address from Event's &this */ 0, Handler)
Event!(Host1, N, Handler)
Re: Events in D
On Tuesday, 29 August 2017 at 05:10:25 UTC, bitwise wrote: I needed some C# style events, so I rolled my own. Long story short, the result was unsatisfactory. [...] Anyways, I threw together some code while thinking about what an event may look like in D: [...] I like the C# event syntax too and came up with the following D analogue, just to prove that a primitive library-based solution in D is doable in 35 lines and can offer as much comfort as C# here.

struct Event(Args)
{
    alias CB = void delegate(Args);
    CB[] callbacks;

    void opOpAssign(string op)(CB handler)
        if (op == "+" || op == "-")
    {
        static if (op == "+")
            callbacks ~= handler;
        else
        {
            import std.algorithm.mutation : remove;
            callbacks = callbacks.remove!(x => x == handler);
        }
    }

    void opOpAssign(string op)(void function(Args) handler)
        if (op == "+" || op == "-")
    {
        import std.functional : toDelegate;
        opOpAssign!op(toDelegate(handler));
    }

    void opCall(Args args)
    {
        foreach (cb; callbacks)
            cb(args);
    }

    bool opCast(T)()
        if (is(T == bool))
    {
        return callbacks.length != 0;
    }
}

The following test code prints the expected output:

import core.stdc.stdio : printf;

struct S
{
    int a;
    void handler(int arg)
    {
        printf("S.handler: this.a = %d, arg = %d\n", a, arg);
    }
}

void func(int arg)
{
    printf("func: arg = %d\n", arg);
}

void main()
{
    Event!int onChanged;
    auto s = S(666);

    assert(!onChanged);
    onChanged += (int arg) { printf("lambda: arg = %d\n", arg); };
    onChanged += &func;
    onChanged += &s.handler;
    assert(onChanged);

    onChanged(1);
    onChanged -= &s.handler;
    onChanged(2);
    onChanged -= &func;
    onChanged(3);
}
Re: void init of out variables
On Saturday, 19 August 2017 at 06:20:28 UTC, Nicholas Wilson wrote: is there a way to not assign to out variables? I don't think so. Is there a good reason not to return the matrix directly (taking advantage of in-place construction)?

float[M][M] f()
{
    float[M][M] mean = void; // init
    return mean;
}
Re: LDC, ARM: unnecessary default initialization
On Friday, 18 August 2017 at 12:09:04 UTC, kinke wrote: On Friday, 18 August 2017 at 09:42:25 UTC, Jack Applegame wrote: For some reason, the LDC default initializes the structure, even if initialization of all its members is specified as void. I believe that this is wrong. Afaik, this has been brought up multiple times already and is so by design. Every aggregate has an init symbol, omitting that (and accordingly the default initialization of all instances) by initializing each field with void doesn't work. The initialization isn't performed fieldwise, but is a bitcopy of T.init. You can skip initialization of specific instances though - `S s = void;` - but again not if `s` is a field of another aggregate. Sorry, I forgot some workaround code:

void ResetHandler()
{
    Foo foo = void;
    foo.__ctor(10); // or: std.conv.emplace(&foo, 10);
}
Re: LDC, ARM: unnecessary default initialization
On Friday, 18 August 2017 at 09:42:25 UTC, Jack Applegame wrote: For some reason, the LDC default initializes the structure, even if initialization of all its members is specified as void. I believe that this is wrong. Afaik, this has been brought up multiple times already and is so by design. Every aggregate has an init symbol, omitting that (and accordingly the default initialization of all instances) by initializing each field with void doesn't work. The initialization isn't performed fieldwise, but is a bitcopy of T.init. You can skip initialization of specific instances though - `S s = void;` - but again not if `s` is a field of another aggregate.
Re: The progress of D since 2013
Hi, On Monday, 31 July 2017 at 07:22:06 UTC, Maxim Fomin wrote: 1) Support of linking in win64? LDC: MSVC targets, both 32 and 64 bits, fully supported since a year or so. Requires Visual Studio 2015+. 2) What is the support of other platforms? AFAIK there was progress on Android. LDC: Quite good. All tests pass on Android, see Joakim Noah's work, but currently requires a tiny LLVM patch. That will be taken care of by LDC 1.4. All tests also passing on ARMv6+ on Linux. A guy got a vibe.d app to work successfully on an ARMv5 industrial controller. AArch64 support is underway... 4) What is the state of GDC/LDC? GDC team was actively working on including gdc in gcc project. And they succeeded, it has recently been accepted. Do gdc and ldc still pull D frontend, so there is essentially 1 frontend (where gdc and ldc frontends lag several versions behind) + 3 backends? More or less. LDC uses a slightly modified D front-end (yep, that's been officially converted to D in case you missed it), whereas Iain/GDC still uses a C++ one, with backports from newer D versions. The lag isn't that bad for LDC; LDC 1.3 uses the 2.073.2 front-end, current master the 2.074.1 one, and there's a WIP PR for 2.075.0, which already compiles.
Re: D easily overlooked?
On Wednesday, 26 July 2017 at 20:23:25 UTC, Ola Fosheim Grøstad wrote: so it doesn't make a whole lot of sense telling people to "improve" on it if they haven't even adopted it (in production). My point was improving vs. complaining. Both take some analysis to figure out an issue, but then some people step up and try to help improving things and some just let out their frustration, wondering why noone has been working on that particular oh-so-obvious thing, and possibly drop out, like all the like-minded guys before them.
Re: D easily overlooked?
On Wednesday, 26 July 2017 at 15:55:14 UTC, Wulfklaue wrote: But how about NOT always adding new features and actually making things easier for new people. People need to eventually understand that all the energy wasted on complaining about D/the community/whatever would be so much more valuable if put into contributions. I'm tired of this negative-vibes, 'somebody else's gotta do this in his/her spare time, I need it so bad!' paradigm. You for example reported a potential LDC issue you encountered (Visual Studio 2017 not autodetected), thanks for taking that time, but when I asked you to dig into it, you didn't even reply: https://github.com/ldc-developers/ldc/issues/2134. That's not the way issues are fixed and D can move forward, and neither is endlessly complaining about the (perceived) status quo. [I ignored this thread for now but was curious why it's still active, so I only read the last 2 posts.]
Re: Anyone relying on signaling NaNs?
On Saturday, 1 October 2016 at 19:10:47 UTC, Martin Nowak wrote: Just tried to fix the float/double initialization w/ signaling NaNs [¹], but it seems we can't reliably do that for all backends/architectures. Any additional move of float might convert SNaNs to QNaNs (quiet NaNs). This has also been the finding of other people [²][³]. The biggest problem w/ the current situation is that float fields of a struct sometimes are initialized to QNaNs and fail `s.field is float.init`. We thought about giving up on SNaNs as default float init values. Is anyone relying on them? I just had the same 'fun' with LDC. Both LDC 1.3 and DMD 2.074.0 produce special quiet NaNs for float.init and double.init on Win64 (both most significant mantissa bits set). I also tried to fix it, but it seems impossible when the x87 FPU (and not SSE) is used. This leads to a Win64 LDC build using signalling inits when cross-compiling via `-m32`, while the native Win32 LDC compiler uses quiet ones etc. So I'm all in for consistent special quiet NaNs as init values for all 3 floating-point types (already implemented, https://github.com/ldc-developers/ldc/pull/2207). If someone relies on signalling NaNs and missed the original post, here's your chance to speak up.
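For anyone wanting to check what their compiler produces, here's a small sketch inspecting the bit pattern of `float.init` (the quiet/signaling distinction per IEEE 754 single precision is the most significant mantissa bit, bit 22):

```d
// Sketch: is float.init a quiet or a signaling NaN with this compiler/target?
import core.stdc.stdio : printf;

void main()
{
    union Bits { float f; uint u; }
    Bits b;
    b.f = float.init;
    // IEEE 754 single precision: mantissa MSB (bit 22) set => quiet NaN
    const quiet = (b.u & (1u << 22)) != 0;
    printf("float.init = 0x%08x => %s NaN\n", b.u,
           quiet ? "quiet".ptr : "signaling".ptr);
}
```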
Re: Webassembly?
On Thursday, 6 July 2017 at 18:26:18 UTC, Joakim wrote: On Thursday, 6 July 2017 at 17:19:34 UTC, bauss wrote: On Thursday, 6 July 2017 at 15:34:08 UTC, Wulfklaue wrote: Is there a future where we can see WebAssembly as part of D? Seeing Rusts backbone already producing wasm is impressive. WebAssembly currently does not support a GC ...so it fair the assume that this will be the main issue for LDC? I see the move towards one language for back and front-end as the future. I'm just curious how it doesn't support GC? Like if you can allocate and free memory then you can have a GC. People usually point to this design doc: https://github.com/WebAssembly/design/blob/master/GC.md I don't know much about GC or their IR, but I get the impression it's more about integrating with the browser's GC, so you can seamlessly pass objects to javascript. I believe people have written their own GCs that target webasm, so the D GC can likely be made to do the same. Wow, quite an opportunity for D I'd say. A D frontend able to interop with the myriads of existing Javascript libraries, using the browser's GC and its DOM as GUI toolkit. Coupled with a vibe.d backend. Frontend and backend sharing D files describing their common interface.
Re: Compilation times and idiomatic D code
On Wednesday, 5 July 2017 at 20:12:40 UTC, H. S. Teoh wrote: I vaguely remember there was talk about compressing symbols when they get too long... is there any hope of seeing this realized in the near future? LDC has an experimental feature replacing long names by their hash; ldc2 -help:

  ...
  -hash-threshold=<value> - Hash symbol names longer than this threshold (experimental)
Re: gdc is in
On Wednesday, 21 June 2017 at 15:11:39 UTC, Joakim wrote: Congratulations to Iain and the gdc team. :) +1. Awesome!
Re: How can I use ldc2 and link to full runtime on arm with no OS
On Tuesday, 20 June 2017 at 17:52:59 UTC, Dan Walmsley wrote: How do I link in the run time and gc, etc? In your case, you firstly need to cross-compile druntime to your target. This means compiling most files in the src subdirectory of LDC's druntime [1], excluding obvious ones like src\test_runner.d, src\core\sys, src\core\stdcpp etc. There are also a bunch of C and assembly files which need to be cross-compiled with a matching gcc. You'll need to do this manually via something along these lines:

cross-gcc -c <.c files and .asm/S files>
ldc2 -mtriple=... -lib -betterC -release -boundscheck=off <.o files generated above> -of=libdruntime.a

Then try linking your minimal code against that druntime (and static C libs, as druntime is built on top of the C runtime, see [2]). Depending on what features you make use of in your code, you'll need to patch linked-in druntime modules to remove the OS dependencies and possibly reduce the C runtime dependencies as well. [1] https://github.com/ldc-developers/druntime. [2] http://forum.dlang.org/thread/mojmxbjwtfmioevuo...@forum.dlang.org
Re: simple ABI change to enable implicit conversion of functions to delegates?
On Monday, 15 May 2017 at 20:14:49 UTC, ag0aep6g wrote: Say, the function ABI uses EAX, EBX, and ECX for the first three arguments (in that order). For a function call `f(1, 2)` that means:

EAX: 1
EBX: 2
ECX: not used

For a delegate call `dg(1, 2)` I'd also put 1 and 2 into EAX and EBX. Additionally, the context pointer would be passed in ECX. Calls to normal functions are supposed to stay as they are. Only method/delegate calls should be affected. If you just want to append an extra context arg by passing it as the last actual arg, it'll end up on the stack sooner or later, and that, I guess, is where bad things may happen by just pushing an additional arg not matching the function signature.
Re: simple ABI change to enable implicit conversion of functions to delegates?
On Monday, 15 May 2017 at 17:03:20 UTC, ag0aep6g wrote: On 05/15/2017 02:27 PM, kinke wrote: Some additional context: https://github.com/dlang/dmd/pull/5232 What I take from that is that changing the way arguments are passed (particularly if they're reversed or not) is going to break a ton of stuff. Well, when I experimentally didn't reverse the args for extern(D) back then for LDC (after patching druntime/Phobos inline asm accordingly...), that single issue prevented a fully green testsuite. The problem is that druntime there goes the other way and invokes a method via a function pointer, so in essence the inverse of what you're after. The problem there is that this/context may be passed differently on Win64; I checked, and LDC only does it for `extern(C++)` for Visual C++ compatibility, not for extern(D), so OTOH the (absolutely unintuitive) resulting argument order for Win64 should currently be:

extern(C++) BigStruct freeFunC(Object this, int b, int c) => __sret, this, b, c
extern(D)   BigStruct freeFunD(Object this, int b, int c) => __sret, c, b, this
extern(C++) BigStruct Object.funC(int b, int c)           => __this, __sret, b, c
extern(D)   BigStruct Object.funD(int b, int c)           => __sret, __this, c, b

And yes, for Win32 there's the __thiscall convention, but also only for extern(C++). `extern(C++)` functions/delegates have to follow it, obviously. But then we can just say that implicit conversion doesn't work with those. Doesn't sound that bad as long as the front-end enforces it.
Re: simple ABI change to enable implicit conversion of functions to delegates?
On Monday, 15 May 2017 at 10:41:55 UTC, ag0aep6g wrote: TL;DR: Changing the ABI of delegates so that the context pointer is passed last would make functions implicitly convertible to delegates, no? In the discussion of issue 17156 [1], Eyal asks why functions (function pointers?) don't convert implicitly to delegates. Walter's answer is that their ABIs differ and that a wrapper would have to be generated to treat a function transparently as a delegate. As far as I understand, the problem is that the hidden context pointer of a delegate takes the first register, pushing the other parameters back. That means the visible arguments are passed in different registers than when calling a function. Some code to show this:

void delegate(int a, int b) dg;

void f(int a, int b)
{
    import std.stdio;
    writeln(a, " ", b);
}

void main()
{
    dg.funcptr = &f; /* This line should probably not compile, but that's another story. */
    dg.ptr = cast(void*) 13;
    f(1, 2);  /* prints "1 2" - no surprise */
    dg(1, 2); /* prints "2 13" */
}

Arguments are put into registers in reverse order. I.e., in a sense, the call `f(1, 2)` passes (2, 1) to f. And the call `dg(1, 2)` passes (13, 2, 1), because a delegate has a hidden last parameter: the context pointer. But `f` isn't compiled with such a hidden parameter, so it sees 13 in `b` and 2 in `a`. The register that holds 1 is simply ignored because there's no corresponding parameter. Now, what if we changed the ABI of delegates so that the context pointer is passed after the explicit arguments? That is, `dg(1, 2)` would pass (2, 1, 13). Then `f` would see 2 in b and 1 in a. It would ignore 13. Seems everything would just work then. This seems quite simple. But I'm most probably just too ignorant to see the problems. Why wouldn't this work? Maybe there's a reason why the context pointer has to be passed first?
[1] https://issues.dlang.org/show_bug.cgi?id=17156

First of all, please don't forget that we're not only targeting X86, and that the args, according to the docs, shouldn't actually be reversed (incl. extern(D) - just on Win32; everywhere else the C ABI is to be followed). Then some ABIs, like Microsoft's, treat `this` in a special way, not just like any other argument (in combination with struct-return), which would apply to method calls via a delegate with context = object reference. Some additional context: https://github.com/dlang/dmd/pull/5232
Re: NetBSD amd64: which way is the best for supporting 80 bits real/double?
On Thursday, 11 May 2017 at 11:31:58 UTC, Nikolay wrote: On Thursday, 11 May 2017 at 11:10:50 UTC, Joakim wrote: Well, if you don't like what's available and NetBSD doesn't provide them... up to you to decide where that leads. In any case it was not my decision. LDC does not use x87 for math functions on other OS's. LDC does use x87 reals on x86, the only exception I'm aware of being Windows (MSVC targets, MinGW would use x87), as the MS C runtimes don't support x87 at all (and they also define a 64-bit `long double` type, so the choice was pretty obvious). I don't have a strong opinion on whether the NetBSD x86 real should be 80 bits with a lot of tweaked tests or 64 bits. The latter is surely the simpler approach though.
Re: Immovable types
On Wednesday, 19 April 2017 at 02:53:18 UTC, Stanislav Blinov wrote: But it is always assumed that a value can be moved. It's not just assumed, it's a key requirement for structs in D, as the compiler can move stuff automatically this way (making a bitcopy and then eliding the postblit ctor for the new instance and the destructor for the moved-from instance). That is quite a different concept from C++, where a (non-elided) special move ctor is required, moved-from instances need to be reset so that their (non-elided) destructor doesn't free moved-from resources, etc.
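The D side of this is directly observable; a small sketch (counter and struct names made up) showing that returning a local performs a bitcopy move - no postblit for the new instance, no destructor for the moved-from local:

```d
// Sketch: observing D's bit-copy moves via postblit/destructor counters.
__gshared int postblits, dtors;

struct S
{
    int v;
    this(this) { ++postblits; }
    ~this()    { ++dtors; }
}

S make()
{
    S s = S(42);
    return s; // moved out: no postblit, and no dtor for the local
}

void main()
{
    {
        auto s = make();
        assert(postblits == 0 && dtors == 0);
    } // s goes out of scope here...
    assert(dtors == 1); // ...and is destructed exactly once
}
```

In C++ terms, the equivalent would have invoked a move constructor and still destructed the moved-from object.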
Re: What are we going to do about mobile?
On Thursday, 6 April 2017 at 05:24:07 UTC, Joakim wrote: D is currently built and optimized for that dying PC platform. I don't think x86 is dying soon, but I agree that embedded architectures get more important every day and should get more focus. I would even go so far as to say it may be worthwhile to develop an ARM backend for dmd. Wasted efforts in my view, there are so many other aspects regarding D which need to be worked on and polished, and we already have (unlike DMD, fully free!) D compilers able to target most architectures used on this planet (with varying level of support obviously, but at least the back-ends are already there). I really don't think DMD for ARM would increase D's popularity on embedded platforms in any way. More than anything else, we need the community to try building mobile libraries and apps, because compiler support is largely done. What LDC would primarily need is a CI platform supporting ARM (and ideally AArch64) in order to make it a true first-class target. We don't know of a free CI platform, so ARM isn't tested automatically, and it's currently mostly up to poor you to check for regressions. ;( Instead of working on an ARM backend for DMD, broadening the upstream runtime libraries for more architectures would make much more sense to me, as it's currently up to LDC and GDC with their severely limited manpower (and the even more limited available hardware to test on) to extend druntime/Phobos for non-x86 platforms. E.g., for AArch64, Phobos fully supporting quad-precision floating-point math would make things easier for us. And full big-endian support in Phobos would be nice for PowerPC targets.
Re: const(Class) is mangled as Class const* const
On Tuesday, 28 March 2017 at 14:11:16 UTC, deadalnix wrote: I understand your point and showed you why it isn't a mangling problem at all, and gave you directions you may want to dig into to make a proposal that may actually get traction. If you really did get my point, it should be clear that I don't see a necessity for differentiating between `const T` and `const(T)` on the D side in general; transitive const hasn't been an issue for my D/C++ interop use cases yet, but the const(Object) issue the OP raises would definitely be a show stopper at some point and, unlike a more general solution, can be trivially remedied.
Re: const(Class) is mangled as Class const* const
On Tuesday, 28 March 2017 at 12:55:02 UTC, deadalnix wrote: On Tuesday, 28 March 2017 at 08:30:43 UTC, kinke wrote: What I don't get is why it's considered important to have a matching C++ mangling for templates across D and C++ - what for? I only care about mangling wrt. If you still think this is a mangling problem, please reread my first response in this thread. You don't seem to get my point; I don't know why it's apparently that hard. I don't want to be able to express both `const T* const` AND `const T*` in C++, I only want D's const(Object) mangling to express solely the former instead of the latter, as there's no use for the double-const, except maybe for templates as Walter pointed out, but there's no template interop between D and C++ anyway. I absolutely don't care that it's inconsistent with what a D const(Object) reference actually is (both pointer and class const) when passing such a thing BY VALUE to C++. As I said, there's no C++ analogue for D object references, so why not have it be a special case in the C++ mangler...
Re: const(Class) is mangled as Class const* const
On Tuesday, 28 March 2017 at 02:14:25 UTC, Jonathan M Davis wrote: Realistically, unless D fully supports C++ (which pretty much means that it has to become C++ on some level), you're almost always going to be stuck with some sort of glue layer between D code and C++ code. There's no reasonable way around that. We can work to improve the situation so that more C++ stuff just works when hooking up to D, but we'll never get all the way there, because that would mean dragging C++ into D, which we really don't want. I know that. I'm just arguing for this tiny change in C++-mangling of D const object references as a remedy for this one particular issue out of many wrt. C++ interop. What I don't get is why it's considered important to have a matching C++ mangling for templates across D and C++ - what for? I only care about mangling wrt. interop if I wanna link to some foreign code, and with templates I can't, so I see absolutely no problem with a D template `extern (C++) void foo()(const(Object) arg)` being mangled slightly differently (arg as `const Object*`) than a C++ template `template <typename T> void foo(const T arg)` with `T = const Object*` (arg as `const Object* const`). Don't get me wrong, I don't advocate C++-mangling a D `const(S*)` for some struct S as `const S*` at all - we're talking specifically about D class references here, which don't have a direct C++ analogue, so choosing to C++-mangle them in a special way sounds absolutely feasible to me.
Re: const(Class) is mangled as Class const* const
On Monday, 27 March 2017 at 22:24:26 UTC, Walter Bright wrote: On 3/27/2017 3:12 PM, kinke wrote: It's made to work with: const T which is the norm with C++ templates. Okay, so how exactly do I bind D code to a C++ header-only-template library? I thought that in that case you need a full D translation anyway... C++ templates are always header-only. I don't really understand your question. Yep, so there are no libs my D code can link to, so how am I supposed to use C++ templates from D (as you're using that as argument for the `const T *const` mangling)?
Re: const(Class) is mangled as Class const* const
On Monday, 27 March 2017 at 22:04:55 UTC, Walter Bright wrote: On 3/27/2017 1:41 PM, kinke wrote: Unfortunately, it's almost always the other way around - D code trying to interop with one of the gazillions existing C++ libs, and nobody wants to maintain his own fork with D-compatible glue interfaces. How often did you use `const T *const` vs. `const T *` in your C++ headers? ;) I think this would be a tiny change for D, breaking almost no code and well worth the reduction in required 'flexibility on the C++ side'. It's made to work with: const T which is the norm with C++ templates. Okay, so how exactly do I bind D code to a C++ header-only-template library? I thought that in that case you need a full D translation anyway...
Re: const(Class) is mangled as Class const* const
On Monday, 27 March 2017 at 20:09:35 UTC, Walter Bright wrote: Whichever way it is mangled will gore someone's ox. D went with the simplest mangling solution, which is to mangle all C++ const pointers as "head const". [...] I suggest a simpler way - declare the C++ side of the D interface in a way that matches the way D mangles it. It's always been true that in order to interface D with C++ you'll need to be a bit flexible on the C++ side. Unfortunately, it's almost always the other way around - D code trying to interop with one of the gazillions existing C++ libs, and nobody wants to maintain his own fork with D-compatible glue interfaces. How often did you use `const T *const` vs. `const T *` in your C++ headers? ;) I think this would be a tiny change for D, breaking almost no code and well worth the reduction in required 'flexibility on the C++ side'.
Re: const(Class) is mangled as Class const* const
On Sunday, 26 March 2017 at 17:41:57 UTC, Benjamin Thaut wrote: There are thousands of C++ libraries out there that can't be bound to D because they use const Class* instead of const Class* const. So in my eyes there is definitely something wrong with the C++ mangling of D. I agree that C++-mangling a const D object reference as `const T *const` isn't helpful, although it would be consistent with D semantics. As deadalnix pointed out, the const for the pointer itself only concerns the callee, not the caller. I sometimes use `void foo(const T *bla); ... void foo(const T *const bla) { ... }` if I find it useful to make clear that `bla` won't change in my foo() implementation, but I never use the second const in the function declaration in the header, as it's just useless clutter for the caller. Having said that, you can only declare a C++ type as a D class if it's exclusively passed and returned as a pointer (at least in the parts you are going to interface with from D). This was true for the C++-based DMD front-end and would also be true for some types used in LLVM. But as soon as you want to interface with a C++ function taking an object as `[const] T&`, afaik you're f*cked and need to declare it as a D struct. So I'm quite skeptical that I'll often be able to use D classes to represent C++ types.
Re: D in China?
On Monday, 20 March 2017 at 12:44:32 UTC, Laeeth Isharc wrote: becoming more involved in the Chinese open-source community I thought we had left behind nations and borders in the open-source community. - Sorry, I couldn't resist. ;)
Re: Phobos and LTO
On Tuesday, 7 March 2017 at 18:46:15 UTC, Johan Engelen wrote: On Tuesday, 7 March 2017 at 18:42:40 UTC, Johan Engelen wrote: It works on OS X too. And OS X is the only platform for which we package the LTO linker binaries in the release. Has anybody tried LLD on Windows for D already? https://lld.llvm.org/windows_support.html If LLD works (or another linker that can use the LLVM plugin), then LTO is also available on Windows. -Johan Yep, I gave LLD 3.9 a try on Win64 some weeks ago. Works out of the box as a drop-in replacement for Microsoft's link.exe, incl. usage of environment variables. What's apparently still lacking is debug info (.pdb) generation, so our CDB debugging tests failed (all others worked IIRC). LLD is supposed to be significantly faster than Microsoft's linker; I haven't done any measurements yet. Besides offering nice stuff like LTO, integrating LLD should allow LDC to directly cross-compile *and* cross-link. So you'll only need the target system libs to produce objects, libraries and executables for 'any' target. Which is pretty awesome.
DPaste using ancient LDC
I'm slightly annoyed by DPaste providing a single ancient LDC version (0.12, 2.063 front-end...). I wouldn't mind if it didn't boldly state `We provide always up-to-date compilers collection!` and wasn't the first result when googling for "dlang online compiler" (I prefer d.godbolt.org, which provides recent LDC/GDC versions, but no DMD).
Re: syntax sugar: std.path::buildPath instead of from!"std.path".buildPath
On Tuesday, 14 February 2017 at 20:03:13 UTC, Jonathan M Davis wrote: That being said, at some point, you have to ask whether each added feature is worth the cost when you consider how it's going to clutter up function signatures even further. And while I do think that there is value in DIP 1005 and the proposed from template, I also think that it's taking it too far. IMHO, it's just not worth marking functions even further - at least not in most code. Maybe it's worth it in something like Phobos where everyone is using it and benefiting from the compilation speed up, but Walter has been wanting to implement lazy imports anyway, and that would fix the problem without doing anything to any function signatures. It does lose the benefits of tying the imports to the function, but personally, I don't think that that's worth the extra cost of further cluttering up the function signature. As it is, I'm increasingly of the opinion that local and selective imports aren't worth it. It's just so much nicer to be able to slap the required imports at the beginning of the module and forget about them than having to worry about maintaining a list of selective imports or have all of the extra import lines inside of all of the functions. And adding imports to the function signatures is just making the whole local import situation that much worse. +1. D's beautiful syntax plays a key role for attracting new folks, and I see it endangered by recent developments.
Re: Updating Windows SDK static libraries of the DMD distribution
On Tuesday, 14 February 2017 at 14:11:31 UTC, Sönke Ludwig wrote: It's a quite frequent issue to get unresolved externals on Windows, because the lib files of the Windows platform SDK are still stuck at Windows XP age. It would make a lot of sense to update those to the latest Windows 10 SDK, but I couldn't find a place where those are present physically, except for the release archives. Does anyone know where those are stored or has the means to update them? Martin? My 2 cents: (serious) Windows devs should install their own Visual C++ (linker + C runtime) & WinSDK and use DMD with `-m32mscoff` or `-m64` (to use the MS linker instead of OptLink). The Win10 SDK doesn't support XP afaik and so may not be suited for all users; there's no 'one size fits all'. I don't see a big future for OptLink and the bundled libs; it's only convenient for beginners so that they don't need the MS stuff and can get started with the DMD redistributable alone. The other DMD redistributables don't ship with a linker and system libs either (and you don't need a full-fledged Visual Studio installation anymore). And once everybody finally switches to 64-bit, OptLink is dead anyway.
Re: pragma(mangle,"name") for a type?
Side note: Microsoft uses a different C++ mangling...
Re: `in` no longer same as `const ref`
On Monday, 30 January 2017 at 19:05:33 UTC, Q. Schroll wrote: Can't we make "in" mean "const scope ref", that binds on r-values, too? Effectively, that's (similar to) what "const T&" in C++ means. It's a non-copying const view on the object. We have the longstanding problem that one must overload a function to effectively bind both l- and r-values. That is what I'd suppose to be the dual to "out". +1000, I really love this proposal; I *hate* that this point is still missing in D in 2017. It shouldn't even break existing code using `in` (though it obviously changes the ABI), just because now a pointer to a const instance is passed instead of making a const copy. Generic stuff: void foo(in T arg) // const scope ref => safe `const T&` By-value optimizations: void foo(const T arg) I love it.
C++ interop
I was wondering whether C++ interop is already considered to be working sufficiently well, as I don't see any plans for improving it in the H1 2017 vision, except for the `C++ stdlib interface` bullet point. IMO, the main obstacles for mixed D/C++ RAII-style code are: 1) Constructors don't work across the C++/D language barrier, as they are mangled differently and slightly differ in semantics (D ctors assume the instance is pre-initialized with T.init) => currently need to implement them on both sides. Additionally, D structs cannot have a non-disabled parameter-less constructor. 2) Destructors need to be implemented on both sides as well. 3) Copy/move constructors/assignment operators too. I think D could do a lot better. Constructors for example:
// D
extern(C++) struct T {
    this(bool a); // declares C++ ctor T::T(bool)

    extern(D) this(int a) {
        // Generates D ctor T::__ctor(int) and C++ ctor T::T(int).
        // The C++ ctor is implemented as `{ this = T.init; this.__ctor(a); }`
        // to initialize the memory allocated by C++ callers.
        // D clients call the D __ctor directly to avoid double-initialization;
        // that's what the extern(D) makes explicit.
    }
}

// C++
struct T {
    T(bool a) {
        // Callable from D; instance will be initialized twice then.
    }
    T(int a); // declares the C++ ctor wrapper emitted by the D compiler
};
Similarly, the D compiler could generate explicit C++ copy and move constructors automatically if the extern(C++) struct has an extern(D) postblit ctor, so that C++ clients only need to declare them. `extern(C++) this(this);` postblit ctor declarations could be used to make D clients call copy/move ctors implemented in C++ etc.
Re: std.math API rework
On Thursday, 6 October 2016 at 20:55:55 UTC, Ilya Yaroshenko wrote: So, I don't see a reason why this change would break something, hehe No, Iain is right. These LLVM intrinsics are most often simple forwarders to the C runtime functions, as I was rather negatively surprised to find out a while ago.
Re: How to link *.a file on Windows?
On Thursday, 22 September 2016 at 17:09:38 UTC, Brian wrote: I used cygwin to build a C++ lib file: libmemcached.a - but how do I link it into my dub project? You'd probably need to use the GNU linker then, but the D objects would need to match the format used for your C++ lib. You could give the GDC compiler a shot, or use clang to compile the C++ lib to MS COFF format (.lib), which can then be fed to both DMD and LDC on Windows.
Re: Optimisation possibilities: current, future and enhancements
On Thursday, 25 August 2016 at 18:15:47 UTC, Basile B. wrote: The problem here is that the example is bad with too aggressive optimizations, because the CALLs are eliminated despite no inlining. [...] int use(const(Foo) foo) { return foo.foo() + foo.foo(); } From my perspective, the problem with this example isn't missed optimization potential. It's the code itself. Why waste implementation efforts for such optimizations, if that would only reward people writing such ugly code with performance equal to a more sane `2 * foo.foo()`? The latter is a) shorter, b) also faster with optimizations turned off and c) IMO simply clearer.
Re: Optimisation possibilities: current, future and enhancements
On Thursday, 25 August 2016 at 18:09:14 UTC, Cecil Ward wrote: On Thursday, 25 August 2016 at 18:07:14 UTC, Cecil Ward wrote: On Thursday, 25 August 2016 at 17:22:27 UTC, kinke wrote: [...] I think that here the optimisation is only because LDC can “see” the text of the method. When expansion is not possible, that would be the real test. (Assuming LDC behaves like GDC. I'm unfamiliar with LDC, I'm ashamed to admit.) You're right. The question is whether it pays off to optimize heavily for externals. If you build all modules of a binary at once via `ldmd2 m1.d m2.d ...` or via `ldc2 -singleobj m1.d m2.d ...`, LDC emits all the code into a single LLVM module, which can then be optimized very aggressively. Call graphs inside the binary are thus taken care of; if it's a well-encapsulated library with few (or expensive) calls to externals, it doesn't matter much. druntime and Phobos are treated as externals, but Johan Engelen already pointed out that LDC could ship with them as LLVM bitcode libraries and then link them in before machine code generation...
Re: Optimisation possibilities: current, future and enhancements
I found it hard to believe LDC generates such crappy code when optimizing. These are my results using LDC master on Win64 (`ldc2 -O -release -output-s`):
struct Foo {
    immutable _u = 8;
    int foo() const { return 8 * _u; }
}
int use(ref const(Foo) foo) { return foo.foo() + foo.foo(); }
int main() {
    Foo f;
    return use(f);
}

_D7current3Foo3fooMxFZi:
    movl (%rcx), %eax
    shll $3, %eax
    retq
_D7current3useFKxS7current3FooZi:
    movl (%rcx), %eax
    shll $4, %eax
    retq
_Dmain:
    movl $128, %eax
    retq
Sure, Foo.foo() and use() could return a constant, but otherwise it can't get much better than this.
Re: Usability of D on windows?
On Wednesday, 24 August 2016 at 21:13:45 UTC, Guillaume Piolat wrote: A minor problem is that on Windows users expect both x86 and x86_64 builds so one has to juggle with the 2 LDC PATH to release both. I've said this thrice already and it's quite minor really. There's a multilib edition for the CI builds for the time being. Just make sure NOT to run it inside a 'VS Tools Command Prompt', so that LDC can set up the MSVC++ environment variables for 32/64 bit linking. Honestly since 1.0.0-b2 it's pure bliss and I've come to trust it very much. Thanks, appreciated!
Re: Slice expressions - exact evaluation order, dollar
On Wednesday, 13 July 2016 at 21:06:28 UTC, kinke wrote: On Monday, 27 June 2016 at 02:38:22 UTC, Timon Gehr wrote: The point is that the slice expression itself does or does not see the updates based on whether I wrap base in a lambda or not. I don't really see a necessity for the lambda to return the same kind (lvalue/rvalue) of value as the expression directly. Oh, that's actually https://issues.dlang.org/show_bug.cgi?id=16271. So lambda wrapping isn't the issue here. It's just that both ways of dealing with the base are possible and arguably plausible. Is the current DMD way (base treated as rvalue) the one to be followed or has just nobody given this a deeper thought yet?
Re: Slice expressions - exact evaluation order, dollar
On Monday, 27 June 2016 at 02:38:22 UTC, Timon Gehr wrote: As far as I understand, for the first expression, code gen will generate a reference to a temporary copy of base, and for the second expression, it will generate a reference to base directly. If lwr() or upr() then update the ptr and/or the length of base, those changes will be seen for the second slice expression, but not for the first. Exactly. That's what I initially asked in "Should the returned slice be based on the slicee's buffer before or after evaluating the bounds expressions?" So Timon prefers the pre-buffer (apparently what DMD does), GDC does the post-buffer, and LDC buggily does something in between (for $, we treat base.length as lvalue, but we load base.ptr before evaluating the bounds, hence treating base as rvalue there). Can we agree on something, add corresponding tests and make sure CTFE works exactly the same? %) The point is that the slice expression itself does or does not see the updates based on whether I wrap base in a lambda or not. I don't really see a necessity for the lambda to return the same kind (lvalue/rvalue) of value as the expression directly.
Re: Slice expressions - exact evaluation order, dollar
On Sunday, 26 June 2016 at 08:08:58 UTC, Iain Buclaw wrote: Now when creating temporaries of references, the reference is stabilized instead. New codegen:
*(_ptr = getBase());
_lwr = getLowerBound(_ptr.length);
_upr = getUpperBound(_ptr.length);
r = {.length=(_upr - _lwr), .ptr=_ptr.ptr + _lwr * 4};
--- I suggest you fix LDC if it doesn't already do this. :-) Thx for the replies - so my testcase works for GDC already? Since what GDC is doing is what I came up with independently for LDC (PR #1566), I'd say DMD needs to follow suit.
Re: Slice expressions - exact evaluation order, dollar
Ping. Let's clearly define these hairy evaluation order details and add corresponding tests; that'd be another advantage over C++.
Slice expressions - exact evaluation order, dollar
The following snippet is interesting:
<<<
__gshared int step = 0;
__gshared int[] globalArray;

ref int[] getBase() {
    assert(step == 0);
    ++step;
    return globalArray;
}

int getLowerBound(size_t dollar) {
    assert(step == 1);
    ++step;
    assert(dollar == 0);
    globalArray = [ 666 ];
    return 1;
}

int getUpperBound(size_t dollar) {
    assert(step == 2);
    ++step;
    assert(dollar == 1);
    globalArray = [ 1, 2, 3 ];
    return 3;
}

// LDC issue #1433
void main() {
    auto r = getBase()[getLowerBound($) .. getUpperBound($)];
    assert(r == [ 2, 3 ]);
}
>>>
Firstly, it fails with DMD 2.071 because $ in the upper bound expression is 0, i.e., it doesn't reflect the updated length (1) after evaluating the lower bound expression. LDC does. Secondly, DMD 2.071 throws a RangeError, most likely because it's using the initial length for the bounds checks too. Most interesting IMO though is the question when the slicee's pointer is to be loaded. This is only relevant if the base is an lvalue and may therefore be modified when evaluating the bound expressions. Should the returned slice be based on the slicee's buffer before or after evaluating the bounds expressions? This has been triggered by https://github.com/ldc-developers/ldc/issues/1433 as LDC loads the pointer before evaluating the bounds.
Re: Good project: stride() with constant stride value
On Friday, 4 March 2016 at 20:14:41 UTC, Andrei Alexandrescu wrote: This is just speculation. When the stride is passed to larger functions the value of the stride is long lost. I understand the desire for nice and simple code but sadly the stdlib is not a good place for it - everything must be tightly optimized. The value of the project stands. -- Andrei With that argument, we might end up with druntime and Phobos completely in manually-tweaked inline assembly to compensate for simpler back-ends. I'm obviously exaggerating, but unless you can show that a compile-time version really provides a significant boost for optimized GDC/LDC builds too, I don't see the project's value.
Re: Good project: stride() with constant stride value
On Friday, 4 March 2016 at 17:49:09 UTC, John Colvin wrote: Surely after inlining (I mean real inlining, not dmd) it makes no difference, a constant is a constant? I remember doing tests of things like that and finding that not only did it not make a difference to performance, ldc produced near-identical asm either way. Then let's not complicate Phobos please. I'm really no friend of special semantics for `step == 0` and stuff like that. Let's keep code as readable and simple as possible, especially in the standard libraries, and let the compilers do their job at optimizing low-level stuff for release builds. More templates surely impact compilation speed, and that's where DMD shines.
Re: Running DMD tests on Windows / build requirements
On Sunday, 21 February 2016 at 16:45:14 UTC, Martin Krejcirik wrote: LINK : fatal error LNK1104: cannot open file 'libucrt.lib' --- errorlevel 1104 Stock dmd doesn't require libucrt.lib. If I copy it to dmd2\windows\lib64 I get:
D:\devel\bugs>dmd -m64 utfbug
DMD v2.068 DEBUG
phobos64.lib(dmain2_629_47b.obj) : error LNK2019: unresolved external symbol __iob_func referenced in function _d_run_main
phobos64.lib(dmain2_629_47b.obj) : error LNK2019: unresolved external symbol _set_output_format referenced in function _d_run_main
phobos64.lib(config_487_452.obj) : error LNK2019: unresolved external symbol sscanf referenced in function _D2gc6config13__T5parseHTfZ5parseFNbNiAxaKAxaKfZb
utfbug.exe : fatal error LNK1120: 3 unresolved externals --- errorlevel 1120
I wouldn't copy headers and libs around; just use properly set-up environment variables (INCLUDE, LIB, LIBPATH etc. - these are set by Visual Studio's vcvarsall.bat; maybe there's something similar for the build tools package as well). Anyway, the latest linking errors are due to some heavy changes in Microsoft's C runtime with VS 2015. Phobos v2.068 doesn't support it yet, so you may want to try the latest version instead.