#dbugfix 16746 - Output Makefile-style depfiles for ninja and make
Hi! Since this issue came up again here just now, I would like to draw your attention to this particular bug. It hinders integration with widely used build systems like Automake, CMake, Meson and build tools like ninja/make, and in general leads to very surprising results, especially when using templates in code that is built as separate compilation units. A patch is available in a PR against dmd, but there has been no new activity since last March, so it would be nice to get feedback and have the change finally merged in some form. Bug: https://issues.dlang.org/show_bug.cgi?id=16746 PR: https://github.com/dlang/dmd/pull/6961 Thank you and greetings from the Debian developers' conference in Taiwan (with more interest in D here than expected :-) ) Cheers, Matthias
Re: Issues with debugging GC-related crashes #2
On Friday, 20 April 2018 at 18:30:30 UTC, Matthias Klumpp wrote: On Friday, 20 April 2018 at 05:32:32 UTC, Dmitry Olshansky wrote: On Friday, 20 April 2018 at 00:11:25 UTC, Matthias Klumpp wrote: On Thursday, 19 April 2018 at 18:45:41 UTC, kinke wrote: [...] [...] I think the order of operations is wrong, here is an example from containers: allocator.dispose(buckets); static if (useGC) GC.removeRange(buckets.ptr); If the GC triggers between dispose and removeRange, it will likely segfault. Indeed! It is also the only place where this order is shuffled around; all other parts of the containers library do this properly. What I wonder about, though, is that the crash usually appeared in an explicit GC.collect() call while the application was not running multiple threads. At that point, the GC - as far as I know - couldn't have triggered after the buckets were disposed of and the ranges were removed. But maybe I am wrong with that assumption. This crash would be explained perfectly by that bug. Turns out that was indeed the case! I created a small testcase which managed to reproduce the issue very reliably on all machines that I tested it on. After reordering the dispose/removeRange calls, the crashes went away completely. I submitted a pull request to the containers library to fix this issue: https://github.com/dlang-community/containers/pull/107 I will also try to get the patch into the components in Debian and Ubuntu, so we may have a chance of updating the software center metadata for Ubuntu before 18.04 LTS releases next week. Since asgen uses HashMaps for pretty much everything, and most of the time with GC-managed elements, this should improve the stability of the application greatly. Thanks a lot for the help in debugging this, I learned a lot about DRuntime internals in the process. Also, it is no exaggeration to say that the appstream-generator project would not be written in D (there was a Rust prototype once...) and I would probably not be using D as much (or at all) without the helpful community around it. Thank you :-)
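For reference, the fix boils down to unregistering the range before freeing the memory it covers. A minimal sketch of the pattern (the function name and allocator choice are illustrative, not the actual containers code):

```
import core.memory : GC;
import std.experimental.allocator : dispose;
import std.experimental.allocator.mallocator : Mallocator;

void teardown(T)(ref T[] buckets)
{
    // Wrong order (the bug): after dispose() the memory is gone,
    // but the GC still has it registered as a range. A collection
    // triggered in between scans freed memory and may segfault.
    //
    //     Mallocator.instance.dispose(buckets);
    //     GC.removeRange(buckets.ptr);

    // Correct order: unregister the range first, then free.
    GC.removeRange(buckets.ptr);
    Mallocator.instance.dispose(buckets);
}
```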
Re: Issues with debugging GC-related crashes #2
On Friday, 20 April 2018 at 05:32:32 UTC, Dmitry Olshansky wrote: On Friday, 20 April 2018 at 00:11:25 UTC, Matthias Klumpp wrote: On Thursday, 19 April 2018 at 18:45:41 UTC, kinke wrote: [...] Jup, I did that already, it just took a really long time to run because when I made the change to print errno I also enabled detailed GC profiling (via the PRINTF* debug options). Enabling the INVARIANT option for the GC is completely broken by the way; I forced the compilation to work by casting to shared, with the result that the GC locks up forever at the start of the program. [...] I think the order of operations is wrong, here is an example from containers: allocator.dispose(buckets); static if (useGC) GC.removeRange(buckets.ptr); If the GC triggers between dispose and removeRange, it will likely segfault. Indeed! It is also the only place where this order is shuffled around; all other parts of the containers library do this properly. What I wonder about, though, is that the crash usually appeared in an explicit GC.collect() call while the application was not running multiple threads. At that point, the GC - as far as I know - couldn't have triggered after the buckets were disposed of and the ranges were removed. But maybe I am wrong with that assumption. This crash would be explained perfectly by that bug.
Re: Issues with debugging GC-related crashes #2
On Friday, 20 April 2018 at 00:11:25 UTC, Matthias Klumpp wrote: [...] Jup, I did that already, it just took a really long time to run because when I made the change to print errno [...] I forgot to mention that: the error code was 12, ENOMEM, so this is actually likely not a relevant issue after all.
Re: Issues with debugging GC-related crashes #2
On Thursday, 19 April 2018 at 18:45:41 UTC, kinke wrote: On Thursday, 19 April 2018 at 17:01:48 UTC, Matthias Klumpp wrote: Something that maybe is relevant though: I occasionally get the following SIGABRT crash in the tool on machines which have the SIGSEGV crash: ``` Thread 53 "appstream-gener" received signal SIGABRT, Aborted. [Switching to Thread 0x7fdfe98d4700 (LWP 7326)] 0x75040428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. (gdb) bt #0 0x75040428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 #1 0x7504202a in __GI_abort () at abort.c:89 #2 0x00780ae0 in core.thread.Fiber.allocStack(ulong, ulong) (this=0x7fde0758a680, guardPageSize=4096, sz=20480) at src/core/thread.d:4606 #3 0x007807fc in _D4core6thread5Fiber6__ctorMFNbDFZvmmZCQBlQBjQBf (this=0x7fde0758a680, guardPageSize=4096, sz=16384, dg=...) at src/core/thread.d:4134 #4 0x006f9b31 in _D3std11concurrency__T9GeneratorTAyaZQp6__ctorMFDFZvZCQCaQBz__TQBpTQBiZQBx (this=0x7fde0758a680, dg=...) at /home/ubuntu/dtc/dmd/generated/linux/debug/64/../../../../../druntime/import/core/thread.d:4126 #5 0x006e9467 in _D5asgen8handlers11iconhandler5Theme21matchingIconFilenamesMFAyaSQCl5utils9ImageSizebZC3std11concurrency__T9GeneratorTQCfZQp (this=0x7fdea2747800, relaxedScalingRules=true, size=..., iname=...) at ../src/asgen/handlers/iconhandler.d:196 #6 0x006ea75a in _D5asgen8handlers11iconhandler11IconHandler21possibleIconFilenamesMFAyaSQCs5utils9ImageSizebZ9__lambda4MFZv (this=0x7fde0752bd00) at ../src/asgen/handlers/iconhandler.d:392 #7 0x0082fdfa in core.thread.Fiber.run() (this=0x7fde07528580) at src/core/thread.d:4436 #8 0x0082fd5d in fiber_entryPoint () at src/core/thread.d:3665 #9 0x in () ``` You probably already figured that the new Fiber seems to be allocating its 16KB-stack, with an additional 4 KB guard page at its bottom, via a 20 KB mmap() call. 
The abort seems to be triggered by mprotect() returning -1, i.e., a failure to disallow all access to the guard page; so checking `errno` should help. Jup, I did that already, it just took a really long time to run because when I made the change to print errno I also enabled detailed GC profiling (via the PRINTF* debug options). Enabling the INVARIANT option for the GC is completely broken by the way; I forced the compilation to work by casting to shared, with the result that the GC locks up forever at the start of the program. Anyway, I think for once I actually produced some useful information via the GC debug options. Given the following crash: ``` #0 0x007f1d94 in _D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv (this=..., ptop=0x7fdfce7fc010, pbot=0x7fdfcdbfc010) at src/gc/impl/conservative/gc.d:1990 p1 = 0x7fdfcdbfc010 p2 = 0x7fdfce7fc010 stackPos = 0 [...] ``` The scanned range seemed fairly odd to me, so I searched for it in the (very verbose!) GC debug output, which yielded: ``` 235.25: 0xc4f090.Gcx::addRange(0x8264230, 0x8264270) 235.244460: 0xc4f090.Gcx::addRange(0x7fdfcdbfc010, 0x7fdfce7fc010) 235.253861: 0xc4f090.Gcx::addRange(0x8264300, 0x8264340) 235.253873: 0xc4f090.Gcx::addRange(0x8264390, 0x82643d0) ``` So, something is calling addRange explicitly there, causing the GC to scan a range that it shouldn't scan. Since my code doesn't add ranges to the GC, and I looked at the generated code from girtod/GtkD and it very much looks fine to me, I am currently looking into EMSI containers[1] as the possible culprit. That library being the issue would also make perfect sense, because this issue started to appear with such frequency only after containers were added (there was a GC-related crash before, but that might have been a different one). So, I will look into that addRange call next. [1]: https://github.com/dlang-community/containers
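(For context on why a container calls addRange at all: when GC-managed references are stored in manually allocated memory, that memory must be registered with the GC so the collector scans it and keeps the referenced objects alive. A hedged sketch of the general idiom, not taken from the containers code:)

```
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

void example()
{
    enum n = 16;
    // Manually allocated (non-GC) storage for GC-managed references.
    auto buf = cast(Object*) malloc(n * Object.sizeof);
    // Register it so the GC scans it; without this, objects stored
    // here could be collected while still in use.
    GC.addRange(buf, n * Object.sizeof);
    scope (exit)
    {
        GC.removeRange(buf); // must happen before free()
        free(buf);
    }
    // ... store and use GC-allocated objects in buf ...
}
```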
Re: Issues with debugging GC-related crashes #2
On Thursday, 19 April 2018 at 08:30:45 UTC, Kagamin wrote: On Wednesday, 18 April 2018 at 22:24:13 UTC, Matthias Klumpp wrote: size_t memSize = pooltable.maxAddr - minAddr; (https://github.com/ldc-developers/druntime/blob/ldc/src/gc/impl/conservative/gc.d#L1982 ) That wouldn't make sense for a pool size... The machine this is running on has 16G memory, at the time of the crash the software was using ~2.1G memory, with 130G virtual memory due to LMDB memory mapping (I wonder what happens if I reduce that...) If big LMDB mapping causes a problem, try a test like this:
---
import core.memory;

void testLMDB()
{
    // how do you use it?
}

void test1()
{
    void*[][] a;
    foreach (i; 0 .. 10) a ~= new void*[1];
    void*[][] b;
    foreach (i; 0 .. 10) b ~= new void*[1];
    b = null;
    GC.collect();
    testLMDB();
    GC.collect();
    foreach (i; 0 .. 10) a ~= new void*[1];
    foreach (i; 0 .. 10) b ~= new void*[1];
    b = null;
    GC.collect();
}
---
I tried something similar, with no effect. Something that maybe is relevant though: I occasionally get the following SIGABRT crash in the tool on machines which have the SIGSEGV crash: ``` Thread 53 "appstream-gener" received signal SIGABRT, Aborted. [Switching to Thread 0x7fdfe98d4700 (LWP 7326)] 0x75040428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. (gdb) bt #0 0x75040428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 #1 0x7504202a in __GI_abort () at abort.c:89 #2 0x00780ae0 in core.thread.Fiber.allocStack(ulong, ulong) (this=0x7fde0758a680, guardPageSize=4096, sz=20480) at src/core/thread.d:4606 #3 0x007807fc in _D4core6thread5Fiber6__ctorMFNbDFZvmmZCQBlQBjQBf (this=0x7fde0758a680, guardPageSize=4096, sz=16384, dg=...) at src/core/thread.d:4134 #4 0x006f9b31 in _D3std11concurrency__T9GeneratorTAyaZQp6__ctorMFDFZvZCQCaQBz__TQBpTQBiZQBx (this=0x7fde0758a680, dg=...) 
at /home/ubuntu/dtc/dmd/generated/linux/debug/64/../../../../../druntime/import/core/thread.d:4126 #5 0x006e9467 in _D5asgen8handlers11iconhandler5Theme21matchingIconFilenamesMFAyaSQCl5utils9ImageSizebZC3std11concurrency__T9GeneratorTQCfZQp (this=0x7fdea2747800, relaxedScalingRules=true, size=..., iname=...) at ../src/asgen/handlers/iconhandler.d:196 #6 0x006ea75a in _D5asgen8handlers11iconhandler11IconHandler21possibleIconFilenamesMFAyaSQCs5utils9ImageSizebZ9__lambda4MFZv (this=0x7fde0752bd00) at ../src/asgen/handlers/iconhandler.d:392 #7 0x0082fdfa in core.thread.Fiber.run() (this=0x7fde07528580) at src/core/thread.d:4436 #8 0x0082fd5d in fiber_entryPoint () at src/core/thread.d:3665 #9 0x in () ``` This is in the constructor of a std.concurrency.Generator: auto gen = new Generator!string (...) I am not sure what to make of this yet though... This goes into DRuntime territory that I actually hoped to never have to deal with as much as I apparently need to now.
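For context, std.concurrency.Generator wraps a core.thread.Fiber, and each Fiber mmaps a fresh stack plus a guard page on construction, so constructing many of them can plausibly run into per-process mapping limits (e.g. vm.max_map_count), which would surface as exactly this ENOMEM from mprotect. A minimal sketch of the construct in question:

```
import std.concurrency : Generator, yield;
import std.stdio : writeln;

void main()
{
    // Each Generator allocates a Fiber, which mmaps a ~16 KB stack
    // with a 4 KB guard page; a failing mprotect() on the guard page
    // aborts in Fiber.allocStack, as seen in the backtrace above.
    auto gen = new Generator!string({
        yield("first");
        yield("second");
    });
    foreach (s; gen)
        writeln(s);
}
```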
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 22:12:12 UTC, kinke wrote: On Wednesday, 18 April 2018 at 20:36:03 UTC, Johannes Pfau wrote: Actually this sounds very familiar: https://github.com/D-Programming-GDC/GDC/pull/236 Interesting, but I don't think it applies here. Both start and end addresses are 16-byte aligned, and both cannot be accessed according to the stack trace (`pbot=0x7fcf4d721010 <Cannot access memory at address 0x7fcf4d721010>, ptop=0x7fcf4e321010 <Cannot access memory at address 0x7fcf4e321010>`). That's quite interesting too: `memSize = 209153867776`. Don't know what exactly it is, but it's a pretty large number (~194 GB). size_t memSize = pooltable.maxAddr - minAddr; (https://github.com/ldc-developers/druntime/blob/ldc/src/gc/impl/conservative/gc.d#L1982 ) That wouldn't make sense for a pool size... The machine this is running on has 16G memory, at the time of the crash the software was using ~2.1G memory, with 130G virtual memory due to LMDB memory mapping (I wonder what happens if I reduce that...)
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 20:36:03 UTC, Johannes Pfau wrote: [...] Actually this sounds very familiar: https://github.com/D-Programming-GDC/GDC/pull/236 it took us quite some time to reduce and debug this: https://github.com/D-Programming-GDC/GDC/pull/236/commits/5021b8d031fcacac52ee43d83508a5d2856606cd So I wondered why I couldn't find this in the upstream druntime code. Turns out our pull request has never been merged: https://github.com/dlang/druntime/pull/1678 Just to be sure, I applied your patch, but unfortunately I still get the same result... On Wednesday, 18 April 2018 at 20:38:20 UTC, negi wrote: On Monday, 16 April 2018 at 16:36:48 UTC, Matthias Klumpp wrote: ... This reminds me of (otherwise unrelated) problems I had involving Linux 4.15. If you feel out of ideas, I suggest you take a look at the kernels. It might be that Ubuntu is turning some security-related knob in a different direction than Debian. Or it might be some bug in 4.15 (I found it to be quite buggy, especially during the first few point releases; 4.15 was the first upstream release including large amounts of meltdown/spectre-related work). All the crashes are happening on a 4.4 kernel though... I am currently pondering digging out a 4.4 kernel here to see if that makes me reproduce the crash locally.
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 20:40:52 UTC, Matthias Klumpp wrote: [...] If possible, I'd give static linking a try. I tried that, with at least linking druntime and phobos statically. I did not, however, link all the things statically. That is something to try (at least statically linking all the D libraries). No luck... ``` #0 0x007f10e8 in _D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv (this=..., ptop=0x7fcf6a11b010, pbot=0x7fcf6951b010) at src/gc/impl/conservative/gc.d:1990 p1 = 0x7fcf6951b010 p2 = 0x7fcf6a11b010 stackPos = 0 stack = {{pbot = 0x7fffcc60, ptop = 0x7f15af <_D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv+1403>}, {pbot = 0xc22bf0 <_D2gc6configQhSQnQm6Config>, ptop = 0xc4cd28}, {pbot = 0x87b4118, ptop = 0x87b4118}, {pbot = 0x0, ptop = 0xc4cda0}, {pbot = 0x7fffcca0, ptop = 0x7f15af <_D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv+1403>}, {pbot = 0xc22bf0 <_D2gc6configQhSQnQm6Config>, ptop = 0xc4cd28}, {pbot = 0x87af258, ptop = 0x87af258}, {pbot = 0x0, ptop = 0xc4cda0}, {pbot = 0x7fffcce0, ptop = 0x7f15af <_D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv+1403>}, {pbot = 0xc22bf0 <_D2gc6configQhSQnQm6Config>, ptop = 0xc4cd28}, {pbot = 0x87af158, ptop = 0x87af158}, {pbot = 0x0, ptop = 0xc4cda0}, {pbot = 0x7fffcd20, ptop = 0x7f15af <_D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv+1403>}, {pbot = 0xc22bf0 <_D2gc6configQhSQnQm6Config>, ptop = 0xc4cd28}, {pbot = 0x87af0d8, ptop = 0x87af0d8}, {pbot = 0x0, ptop = 0xc4cda0}, {pbot = 0x7fdf6b265000, ptop = 0x69b96a0}, {pbot = 0x28, ptop = 0x7fcf5951b000}, {pbot = 0x309eab7000, ptop = 0x7fdf6b265000}, {pbot = 0x0, ptop = 0x0}, {pbot = 0x1381d00, ptop = 0x1c}, {pbot = 0x1d, ptop = 0x1c}, {pbot = 0x1a44100, ptop = 0x1a4410}, {pbot = 0x1a44, ptop = 0x4}, {pbot = 0x7fdf6b355000, ptop = 0x69b96a0}, {pbot = 0x28, ptop = 0x7fcf5951b000}, {pbot = 0x309eab7000, ptop = 0x4ac0}, {pbot = 0x4a, ptop = 0x0}, {pbot = 0x1381d00, ptop = 0x1c}, {pbot = 0x1d, ptop = 0x1c}, {pbot = 0x4ac00, ptop = 0x4ac0}, {pbot = 0x4a, ptop = 
0x4}} pcache = 0 pools = 0x69b96a0 highpool = 40 minAddr = 0x7fcf5951b000 memSize = 208820465664 base = 0xaef0 top = 0xae p = 0x4618770 pool = 0x0 low = 110859936 high = 40 mid = 140528533483520 offset = 208820465664 biti = 8329709 pn = 142275872 bin = 1 offsetBase = 0 next = 0xc4cc80 next = {pbot = 0x7fffcbe0, ptop = 0x7f19ed <_D2gc4impl12conservativeQw3Gcx7markAllMFNbbZ14__foreachbody3MFNbKSQCm11gcinterface5RangeZi+57>} __r292 = 0x7fffd320 __key293 = 8376632 rng = @0x0: #1 0x007f19ed in _D2gc4impl12conservativeQw3Gcx7markAllMFNbbZ14__foreachbody3MFNbKSQCm11gcinterface5RangeZi (this=0x7fffd360, __applyArg0=...) at src/gc/impl/conservative/gc.d:2188 range = {pbot = 0x7fcf6951b010, ptop = 0x7fcf6a11b010, ti = 0x0} #2 0x007fd161 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf7opApplyMFNbMDFNbKQBtZiZ9__lambda2MFNbKxSQCpQCpQCfZi (this=0x7fffd320, e=...) at src/rt/util/container/treap.d:47 #3 0x007fd539 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (dg=..., node=0x80396c0) at src/rt/util/container/treap.d:221 result = 0 #4 0x007fd565 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (dg=..., node=0x87c8140) at src/rt/util/container/treap.d:224 result = 0 #5 0x007fd516 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (dg=..., node=0x7fdfc8000950) at src/rt/util/container/treap.d:218 result = 16844032 #6 0x007fd516 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (dg=..., node=0x7fdfc8000a50) at src/rt/util/container/treap.d:218 result = 0 #7 0x007fd516 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi 
(dg=..., node=0x7fdfc8000c50) at src/rt/util/container/treap.d:218 result = 0 [etc...] #37 0x0077e889 in core.memory.GC.collect() () at src/core/memory.d:207 #38 0x006b4791 in asgen.engine.Engine.gcCollect() (this=0x77ee13c0) at ../src/asgen/engine.d:122 ```
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 18:55:48 UTC, kinke wrote: On Wednesday, 18 April 2018 at 10:15:49 UTC, Kagamin wrote: There's a number of debugging options for GC, though not sure which ones are enabled in default debug build of druntime Speaking for LDC, none are, they all need to be enabled explicitly. There's a whole bunch of them (https://github.com/dlang/druntime/blob/master/src/gc/impl/conservative/gc.d#L20-L31), so enabling most of them would surely help in tracking this down, but it's most likely still going to be very tedious. I'm not really surprised that there are compilation errors when enabling the debug options, that's a likely fate of untested code unfortunately. Yeah... Maybe making a CI build with "enable all the things" makes sense to combat that... If possible, I'd give static linking a try. I tried that, with at least linking druntime and phobos statically. I did not, however, link all the things statically. That is something to try (at least statically linking all the D libraries).
Re: Issues with debugging GC-related crashes #2
On Wednesday, 18 April 2018 at 10:15:49 UTC, Kagamin wrote: You can call GC.collect at some points in the program to see if they can trigger the crash I already do that, and indeed I get crashes. I could throw those calls into every function though, or set a minimal pool size; maybe that yields something... https://dlang.org/library/core/memory/gc.collect.html If you link against debug druntime, GC can check invariants for correctness of its structures. There's a number of debugging options for GC, though not sure which ones are enabled in default debug build of druntime: https://github.com/ldc-developers/druntime/blob/ldc/src/gc/impl/conservative/gc.d#L1388 I get compile errors for the INVARIANT option, and I don't actually know how to deal with those properly: ``` src/gc/impl/conservative/gc.d(1396): Error: shared mutable method core.internal.spinlock.SpinLock.lock is not callable using a shared const object src/gc/impl/conservative/gc.d(1396): Consider adding const or inout to core.internal.spinlock.SpinLock.lock src/gc/impl/conservative/gc.d(1403): Error: shared mutable method core.internal.spinlock.SpinLock.unlock is not callable using a shared const object src/gc/impl/conservative/gc.d(1403): Consider adding const or inout to core.internal.spinlock.SpinLock.unlock ``` Commenting out the locks (eww!!) yields no change in behavior though. The crashes always appear in https://github.com/dlang/druntime/blob/master/src/gc/impl/conservative/gc.d#L1990 Meanwhile, I also tried to reproduce the crash locally in a chroot, with no result. All libraries used between the machine where the crashes occur and my local machine were 100% identical; the only differences I am aware of are the hardware (AWS cloud vs. home workstation) and the Linux kernel (4.4.0 vs 4.15.0). The crash happens whether built with LDC or DMD; the compiler choice doesn't influence the result. Copying over a binary from the working machine to the crashing one also results in the same errors. 
I am completely out of ideas here. Since I think I can rule out a hardware fault at Amazon, I don't even know what else would make sense to try.
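One way to make such a heisenbug more deterministic is to force collections at known checkpoints, so a dangling GC range blows up near the code that created it rather than at some arbitrary later collection. A hypothetical helper along those lines (GCStress is an invented debug identifier, enabled with -debug=GCStress):

```
import core.memory : GC;

// Hypothetical helper: sprinkle gcCheckpoint("...") calls through the
// code; with -debug=GCStress each one forces a full collection, making
// dangling-range crashes surface close to their cause.
void gcCheckpoint(string where)
{
    debug (GCStress)
    {
        import std.stdio : stderr;
        stderr.writeln("GC checkpoint: ", where);
        GC.collect();
        GC.minimize();
    }
}
```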
Re: Issues with debugging GC-related crashes #2
On Tuesday, 17 April 2018 at 08:23:07 UTC, Kagamin wrote: Other stuff to try: 1. run application compiled on debian against ubuntu libs 2. can you mix dependencies from debian and ubuntu? I haven't tried that yet (it's next on my todo list). If I run the program compiled with AddressSanitizer on Debian, I get errors like: ``` AddressSanitizer:DEADLYSIGNAL ==25964==ERROR: AddressSanitizer: SEGV on unknown address 0x7fac8db3f800 (pc 0x7fac9c433430 bp 0x0008 sp 0x7ffc92be3dd0 T0) ==25964==The signal is caused by a READ memory access. #0 0x7fac9c43342f in _D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xa142f) #1 0x7fac9c433a2f in _D2gc4impl12conservativeQw3Gcx7markAllMFNbbZ14__foreachbody3MFNbKSQCm11gcinterface5RangeZi (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xa1a2f) #2 0x7fac9c459ad4 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xc7ad4) #3 0x7fac9c459ac6 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xc7ac6) #4 0x7fac9c459ac6 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xc7ac6) #5 0x7fac9c459ac6 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf13opApplyHelperFNbxPSQDeQDeQDcQCv__TQCsTQCpZQDa4NodeMDFNbKxSQDiQDiQCyZiZi (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xc7ac6) #6 0x7fac9c459a51 in _D2rt4util9container5treap__T5TreapTS2gc11gcinterface5RangeZQBf7opApplyMFNbMDFNbKQBtZiZi (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xc7a51) #7 0x7fac9c430f26 in _D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm 
(/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0x9ef26) #8 0x7fac9c431226 in _D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs18fullCollectNoStackMFNbZ2goFNbPSQEaQEaQDyQEj3GcxZmTQvZQDfMFNbKQBgZm (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0x9f226) #9 0x7fac9c4355d0 in gc_term (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xa35d0) #10 0x7fac9c443ab2 in rt_term (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xb1ab2) #11 0x7fac9c443e65 in _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xb1e65) #12 0x7fac9c443d0b in _d_run_main (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xb1d0b) #13 0x7fac9b9cfa86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21a86) #14 0x55acd1dbe1d9 in _start (/home/matthias/Development/AppStream/generator/build/src/asgen/appstream-generator+0xba1d9) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (/usr/lib/x86_64-linux-gnu/libdruntime-ldc-shared.so.78+0xa142f) in _D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv ==25964==ABORTING ``` So, I don't think this bug is actually limited to Ubuntu, it just shows up there more often for some reason.
Re: Issues with debugging GC-related crashes #2
On Monday, 16 April 2018 at 16:36:48 UTC, Matthias Klumpp wrote: [...] The code uses std.typecons.scoped occasionally, does no GC allocations in destructors and does nothing to mess with the GC in general. There are a few calls to GC.add/removeRoot in the gir-to-d generated code (ObjectG.d), but those are very unlikely to cause issues (removing them did yield the same crash, and the same code is used by more projects). [...] Another thing to mention is that the software uses LMDB[1] and mmaps huge amounts of data into memory (gigabyte range). Not sure if that information is relevant at all though. [1]: https://symas.com/lmdb/technical/
Re: Want to start a graphical Hello world.
On Sunday, 21 January 2018 at 04:16:10 UTC, MHE wrote: [...] For this i have made a folder named GIT in the linux directory /usr/local/GIT And Gitcloned in the GIT folder : git clone https://github.com/nomad-software/x11 git clone https://github.com/nomad-software/tcltk.git The clones are now in /usr/local/GIT/ Hmm, but you are using GtkD in your example... It looks like you compiled the wrong code. What you actually want is GtkD from https://github.com/gtkd-developers/GtkD or installed from the Debian repositories (in case you want to use LDC) via `sudo apt install libgtkd-3-dev` When i start the upper code in a cmd the result is as follows : ___ $ dmd hw_graphical.d hw_graphical.d(1): Error: module MainWindow is in file 'gtk/MainWindow.d' which cannot be read import path[0] = /usr/include/dmd/phobos import path[1] = /usr/include/dmd/druntime/import What must be done that the code make a graphical window ? If you want to use GtkD and achieve a quick result, you might want to use the Dub package manager. GtkD also has a few demo applications that show you how to use it with dub, take a look at https://github.com/gtkd-developers/GtkD/tree/master/demos/gtk I want to make the D programming tkd module ready for GUI programming. Need a step by step advice for BEGINNERS in D programming. tkd is a project different from GtkD (which your example above is using). For "modern" UI that integrates well with GNOME, you will highly likely want to use GtkD. Tcl/Tk is of course an option as well, but the D bindings look less complete and well maintained. In case you want to use Tk, you will need to change your code to actually use it, instead of GTK+ though ;-) Any Internetlink for step by step instructions how to arrange D and TK would be helpful ! I can't help with Tk, but for GTK+ a quick Google search found https://sites.google.com/site/gtkdtutorial/ - this tutorial is rather old, but it might be useful as a reference. 
This experience report by Gerald Nunn might also be an interesting read for you: https://gexperts.com/wp/learning-d-and-gtk/ Cheers, Matthias
Re: gdc is in
On Sunday, 8 October 2017 at 08:38:15 UTC, Iain Buclaw wrote: On 7 October 2017 at 19:42, Nordlöw via Digitalmars-d wrote: On Friday, 6 October 2017 at 15:21:05 UTC, jmh530 wrote: I would think this would be bigger news... I mean LDC isn't even on 2.076 yet... I very much agree. This is fantastic news! Are there any beta (alpha|beta|rc) builds available for download? No, but then again the poor little server is so constrained on resources at the moment, I'll be wanting to move to a new provider (replacing the single server with four) before such a thing happens. I've had to add more swap because CI builds for mips, ppc and sparc started failing due to hitting the oomkiller. Donating for the upkeep of our infrastructure is also welcome. ;-) This might be worthwhile. It is highly likely that when GDC is in GCC it will become the default D compiler in Red Hat Enterprise Linux (maybe even with real support in RHEL) and in other Linux distros like SUSE, and there is even a chance we'll use it as the default D compiler in Debian, due to GCC being the default system compiler with strong optimizations and very good support for multiple architectures (not sure yet though, since LDC works well too when it's not hit by the surprisingly frequent LLVM bugs). So, in any case, GDC in GCC is *huge* and there is a high chance that people will come in contact with D through it first. When it's in, I can try to get GDC a few more machines for CI (I can't promise anything at the moment, unfortunately).
Re: D and Meson
On Monday, 19 June 2017 at 12:21:24 UTC, Mike B Johnson wrote: [...] Funny: "The main design point of Meson is that every moment a developer spends writing or debugging build definitions is a second wasted. So is every second spent waiting for the build system to actually start compiling code." Which is in direct contradiction to what Walter has said... and yet Walter is supposed to be all about fast cars and hot women. Walter said you should invest a lot of time waiting for the build process? :P On Wednesday, 14 June 2017 at 15:25:55 UTC, Russel Winder wrote: [...] If the person running the D support for Meson is on this list, please contact me privately to tell me what I can do to help progress that support further. Any changes that upstream Meson is happy with are fine :-) The best thing one could possibly do at the moment to help improve Meson support would be to fix this DMDFE feature request: https://issues.dlang.org/show_bug.cgi?id=16746 This will also benefit a lot of other build systems that rely on incremental builds.
Re: dmd debian installation conflicts with debian-goodies
On Wednesday, 28 June 2017 at 10:09:06 UTC, Ralph Amissah wrote: Installing dmd if debian-goodies is installed fails. Both try to write a file named '/usr/bin/dman' Debian Stretch is out, the freeze is over, perhaps now dmd will soon be available as a package in Debian? Ldc2 does a great job, but for testing purposes and convenience it would be good to have the reference compiler. Long-term, we will likely be using GDC as the default D compiler in Debian, if that becomes viable. That GDC is in GCC now is a very big deal, which makes maintaining D in Debian and in any Linux distribution that uses GCC as the system compiler much easier. Also, there is some company interest now, since it is expected that GCC/GDC will hit enterprise distributions such as RHEL as well, and thereby be widely available. That being said, I want DMD to be available in Debian too, and LDC is doing a very good job at the moment, serving as our de-facto default D compiler. Unfortunately, now that the dman binary name is taken, DMD can't use it in Debian; the binary would have to be renamed, even if just temporarily, in case we can convince the debian-goodies maintainer to change the name of the existing binary. Is there likely to be D related activity at DebCamp and DebConf 2017, Montreal? Nothing is planned yet, but if there is interest, I would be happy to organize a BoF session there. Cheers, Matthias
Re: [OT] Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 18:14:40 UTC, David Nadlinger wrote: On Tuesday, 11 April 2017 at 12:03:27 UTC, Matthias Klumpp wrote: On Monday, 10 April 2017 at 22:15:53 UTC, David Nadlinger wrote: So do we need to put a reminder about the ABI being unstable into every set of release notes to make sure we won't get angry bug reports once users actually build their own D code against your packages? ;) Nah, there are several options here; one would simply be to tell people not to use the distro packages with anything but the default D compiler of the respective Debian release. So as long as one sticks to packages in the official apt repos, all the libraries are guaranteed to be built with the distributed compiler as well? Yes. Unfortunately there will be three compilers which aren't compatible with each other, so we will have to settle on one as the default. When you mentioned that you'd read the release notes regarding the ABI change, I got the impression that you had to manually rebuild the world for that to happen – hence my tongue-in-cheek remark about reminding you to do this in the release notes. Well, it's a matter of telling the build admins or making a proper transition package (that doesn't exist yet for D), but yeah, technically we'd need to rebuild all D stuff on ABI changes.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 15:31:46 UTC, David Nadlinger wrote: On Tuesday, 11 April 2017 at 12:38:01 UTC, Matthias Klumpp wrote: If you could change the SOVERSION with every one of these changes, or simply just tie it to the respective Phobos release, distributions would automatically do the right thing and compile all D code using Phobos against the new version. As you mention, this is already done in LDC; not just the Debian packages, but also upstream. The soname will always be `libphobos2-ldc.so.74` or what have you. (Thinking about it, it should probably include the patch version as well, although we haven't had a situation so far where we would have wanted to release multiple LDC versions for ABI-incompatible patch releases.) Phobos is versioned properly in both GDC and LDC, and as long as that continues, no problems exist at all :-) https://packages.debian.org/search?suite=stretch&keywords=phobos This would - as said - also work for other D shared libraries, unless the D ABI in general breaks or someone tries to build a program with GDC and uses a library that was built with LDC or DMD (or any other possible compiler combination).
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 14:43:15 UTC, Russel Winder wrote: On Tue, 2017-04-11 at 14:21 +, Matthias Klumpp via Digitalmars-d wrote: […] At the moment I am playing around with the idea of using pkg-config[1] files to list the sources a D library consists of. By doing that, we would have a very build-system-agnostic way of storing that information that can be used by Automake (native), Dub, Meson and even plain Makefiles. It's also very widely used (although it is commonly not used to list sources). And SCons. What about CMake? (Which is the CLion build system using Make underneath, but they are rapidly moving to supporting Ninja.) CMake supports this as well: https://cmake.org/cmake/help/v3.0/module/FindPkgConfig.html The current idea is, in case a library "foobar" is packaged, to have a "foobar-src.pc" pkg-config file (in addition to a potentially existing "foobar.pc" file), containing something like this:

```
prefix=/usr/local
exec_prefix=${prefix}
includedir=${prefix}/include/d/foo

Name: foobar
Description: The foo library (sources)
Version: 1.0.0
Cflags: -I${includedir} -d-version=blahblub
Sources: ${includedir}/alpha.d ${includedir}/beta.d ${includedir}/gamma.d
```

Build systems would then need to explicitly fetch the Sources field and add its expanded values to the project's sources. (Using Cflags for this would be messy, as the build system might want to deal with flags and sources separately.) Alternatively, dub could define a layout and we could write plugins for each build system to make it work. This will be really annoying with large libraries like GtkD though, which will take substantially longer to build. Maybe it's worth keeping some libraries precompiled (thereby locking all their reverse dependencies to whatever D compiler was used to compile the library).
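To make the idea concrete, here is a rough sketch of how a build tool could extract and expand such a Sources field. Note that pkg-config itself has no built-in support for a custom keyword like `Sources:` (it only exposes name=value variables via `--variable`), so this parses the file by hand; all file names and paths are hypothetical, taken from the example above:

```shell
set -eu
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

# Write the hypothetical foobar-src.pc from the example.
cat > "$tmpdir/foobar-src.pc" <<'EOF'
prefix=/usr/local
exec_prefix=${prefix}
includedir=${prefix}/include/d/foo

Name: foobar
Description: The foo library (sources)
Version: 1.0.0
Cflags: -I${includedir} -d-version=blahblub
Sources: ${includedir}/alpha.d ${includedir}/beta.d ${includedir}/gamma.d
EOF

# Read the variable definitions so ${includedir} can be expanded.
prefix=$(sed -n 's/^prefix=//p' "$tmpdir/foobar-src.pc")
includedir=$(sed -n 's/^includedir=//p' "$tmpdir/foobar-src.pc" \
  | sed "s|\${prefix}|$prefix|")

# Extract the Sources field and substitute ${includedir} everywhere.
sources=$(sed -n 's/^Sources: //p' "$tmpdir/foobar-src.pc" \
  | sed "s|\${includedir}|$includedir|g")
echo "$sources"
```

A build system would then append the resulting list of .d files to the project's own sources instead of linking against a shared library.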
One problem with the pkg-config approach is that, to support the precompiled case too, build systems would need to search for both "foobar" and "foobar-src" and only fail dependency detection if both of them are missing. At least with CMake/Meson that's easy to do, although it's a bit messy.
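The dual lookup described above can be sketched as follows. The package names are hypothetical, and a real build system would of course query pkg-config rather than check for .pc files directly; this only illustrates the fallback logic:

```shell
set -eu
pcdir=$(mktemp -d)
trap 'rm -rf "$pcdir"' EXIT

find_d_dependency() {
  # $1 = library name, $2 = directory containing .pc files
  if [ -f "$2/$1.pc" ]; then
    echo "shared"      # link against the precompiled library
  elif [ -f "$2/$1-src.pc" ]; then
    echo "source"      # compile the listed sources into the project
  else
    echo "missing"     # fail detection only when both are absent
    return 1
  fi
}

# Simulate a system that only ships the source variant of "foobar".
touch "$pcdir/foobar-src.pc"

result=$(find_d_dependency foobar "$pcdir")
echo "foobar: $result"
```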
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 14:49:03 UTC, Russel Winder wrote: [...] Having played a bit with GtkD, you always want this as a shared library for development. Yeah, GtkD is pretty massive and takes quite a long time to compile... Redoing that for every piece of software depending on it is pretty wasteful.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 14:26:37 UTC, rikki cattermole wrote: [...] The problem with /usr/include/d is that that is where .di files would be located, not .d. This would also match up with the C/C++ usage of it. When I asked about this a while back, I was told to just install the sources into the include dir, as "almost nobody uses .di files except for proprietary libraries" (and do those even exist?). But in any case, any path would be fine with me as long as people can settle on using it - `/usr/share/d` would be available ^^
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 14:04:44 UTC, rikki cattermole wrote: [...] /usr/share/source/D/package-name-version Add a search path like that to Dub and create source-only library packages, and that is pretty much all the distribution we need for libraries, I reckon. It's more likely that the path would be `/usr/include/d`, because that's the place where D sources seem to usually reside, especially if they are also used as "headers" for shared libraries. At the moment I am playing around with the idea of using pkg-config[1] files to list the sources a D library consists of. By doing that, we would have a very build-system-agnostic way of storing that information that can be used by Automake (native), Dub, Meson and even plain Makefiles. It's also very widely used (although it is commonly not used to list sources). This would also jive really well with the existing D packages in Debian, and we could - if we go down the source-only route - still decide to precompile a few selected libraries, for example in case they are very time-consuming to compile or really large. This will *not* solve the issues with Phobos breakage though, as Phobos is a shared library. But doing that would pretty much work around all other problems, except for the cost of having statically linked binaries, but that seems to be inevitable anyway given the state of things. (Disclaimer: This is not a D team policy draft yet, just an opinion at the moment that might be refined or changed entirely) [1]: https://www.freedesktop.org/wiki/Software/pkg-config/
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 13:43:38 UTC, Jacob Carlborg wrote: On 2017-04-11 02:47, Jonathan M Davis via Digitalmars-d wrote: Honestly, I don't see how it really makes much sense to use shared libraries with D except in cases where you have no choice. The lack of ABI compatibility makes them almost useless. Also, what are we even looking to distribute in debian? I would have thought that the normal thing to do would be to build with dub, in which case, having the compiler and dub be debian packages makes sense but not really anything else. If you're looking to package an application that was written in D, then that becomes another question, but then if you just statically link it, the ABI compatibility problem goes away as does any need to package any D library dependencies. I agree, I don't see any point in distributing libraries, just applications. But I do know some people will refuse to install anything that doesn't come through the system package manager. Every single bit of software that is available in the distribution needs to be packaged in it, so you can replicate its build using only what is available in the distro. Fetching things from the internet is not allowed. (That's actually a hard no in the policy, while other things, like the use of static linking, are rather "you shouldn't do it if you can avoid it".) The fundamental thing a distribution does is integrating software and creating a consistent whole out of many moving parts. In order to do that, you absolutely cannot rely on ecosystem-specific package managers like pip, dub, npm, etc., as they are not built for that purpose and only see their own ecosystem. You could still build with them, though, in case they integrate well with the distro.
Re: [OT] Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 12:42:13 UTC, Russel Winder wrote: On Tue, 2017-04-11 at 12:03 +, Matthias Klumpp via Digitalmars-d wrote: […] Nah, there are several options here, one would simply be to tell people not to use the distro packages with anything but the default D compiler used in the respective Debian release. Go apparently tells people not to use Debian-shipped Go code in their own projects at all. The vendoring systems that Go folk have invented are effectively mandatory for projects that want reproducible builds, and using platform-specific code is not feasible. It surprises me that Debian and Fedora are going flat out trying to package Go stuff. That's false. Debian is leading the effort on reproducible builds that many other projects (including Fedora) have joined, and a large chunk of packages is already reproducible[1]. It's actually quite the opposite: Build systems downloading random stuff from the internet make the system more likely to produce different build results. But in any case, the primary use for Debian packages is to be used by the distribution. [1]: https://tests.reproducible-builds.org/debian/reproducible.html
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Tuesday, 11 April 2017 at 00:47:34 UTC, Jonathan M Davis wrote: On Monday, April 10, 2017 23:08:17 David Nadlinger via [...] Also, what are we even looking to distribute in debian? I would have thought that the normal thing to do would be to build with dub, in which case, having the compiler and dub be debian packages makes sense but not really anything else. If you're looking to package an application that was written in D, then that becomes another question, but then if you just statically link it, the ABI compatibility problem goes away as does any need to package any D library dependencies. You will have static-library packages which have the exact same ABI issues shared libraries have. And yeah, this is obviously about stuff being built with D compilers in the distro, such as Tilix, BioD, AppStream Generator and all future things which might emerge and be useful to have in the OS.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 23:43:04 UTC, David Nadlinger wrote: On Monday, 10 April 2017 at 23:27:35 UTC, Walter Bright wrote: The next problem is that dmd occasionally changes the interface to the D runtime. […] I also do not know how the gdc/ldc druntime interfaces differ. Just to make this very clear to everybody reading this thread: It's not even just that, but also the fact that we guarantee API-, but not ABI-stability for Phobos. Every time we continue to improve the pure/nothrow/@nogc situation by marking up some more code, we are breaking the ABI, because the mangled names of the involved symbols change. The ongoing work on `scope` also breaks the ABI when enabled. If you could change the SOVERSION with every one of these changes, or simply just tie it to the respective Phobos release, distributions would automatically do the right thing and compile all D code using Phobos against the new version. This might give Phobos a large soversion or an ugly one like "2.074", but it would work. (LDC's Phobos and GDC's Phobos already have a soversion set in Debian...)
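To illustrate what tying the soversion to the Phobos release would look like on disk, here is a sketch with made-up file names (the `.so.74` soname mirrors the LDC example from this thread; in actual packaging the real file plus soname link would go into the runtime package and the unversioned link into the -dev package):

```shell
set -eu
libdir=$(mktemp -d)
trap 'rm -rf "$libdir"' EXIT

# Pretend the soversion 74 is tied to a specific Phobos release.
touch "$libdir/libphobos2-ldc.so.74.0"                        # actual library file
ln -s libphobos2-ldc.so.74.0 "$libdir/libphobos2-ldc.so.74"   # soname link (runtime pkg)
ln -s libphobos2-ldc.so.74 "$libdir/libphobos2-ldc.so"        # dev link (-dev pkg)

# The dynamic linker resolves binaries via the soname link:
readlink "$libdir/libphobos2-ldc.so.74"
```

Bumping the soversion on every ABI-breaking release would then change the soname link, which is exactly what triggers dependency rebuilds in distro tooling.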
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 23:33:17 UTC, Walter Bright wrote: On 4/10/2017 6:08 AM, Matthias Klumpp wrote: I also want to stress that having a single C++ library like Boost compiled into stuff and rolling dependency transitions when its API/ABI changes with a major release is less of a problem than having the entire language give zero stability and interoperability guarantees on anything that is compiled with it. How is the g++/clang++ issue handled? The C ABI is 100% compatible, the C++ ABI is "mostly" compatible, there is some deliberate breakage from the Clang guys though. The issue isn't actually handled in Debian as all our code is always compiled with GCC, I am not aware of anything defaulting to Clang (although it might exist, but definitely not for library packages). With only one dominant compiler, things are way easier ^^
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 23:27:35 UTC, Walter Bright wrote: On 4/10/2017 5:59 AM, Matthias Klumpp wrote: You need to see here that D is not the center of the world and we will need to make it work nicely with the rest of the system. The technical policies work for everything else, so there is nothing that really justifies an exception for D here (if 10% of Debian's code was written in D and the Debian D team was really large we could maybe get one, but not the way it is now). And tbh, I think finding a good solution here is entirely possible. I think it is possible, too, and thank you for your efforts helping us do this. The ABI differences between the compilers are unfortunate, and are largely the result of historical accident. The first problem is that my idea originally was for D to have its own function calling convention, which would free us to innovate to have a more efficient calling convention than the C one. This hasn't panned out in practice, and ldc/gdc sensibly decided to stick with the C ABI. At some point, we should just crowbar dmd to generate the C ABI, but this has its own problems - it'll break code that uses the inline assembler. No obvious solution there. The next problem is that dmd occasionally changes the interface to the D runtime. Or more accurately, with about every release. This has not been an issue historically for us, as the two have always been a matched set. I'm a lot less sure how to deal with this. I also do not know how the gdc/ldc druntime interfaces differ. Thank you very much for this context!
It's really good to know why things are the way they are in order to properly understand the problem. (I am no compiler developer and ABIs are not my expertise - as an outsider I see pages like https://dlang.org/spec/abi.html and wonder whether the incompatibilities are seen as an issue in the D community, and whether there is a chance to address them before putting work into setting up infrastructure to rebuild the world on compiler updates.)
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 23:08:17 UTC, David Nadlinger wrote: On Monday, 10 April 2017 at 17:27:28 UTC, Matthias Klumpp wrote: That's why I have been writing a lot of Makefiles and Meson build definitions lately. It seems like doing so without having a closer look at the realities of D software (no stable ABI, etc.) might not have been the best use of your time. How would Dub be so fundamentally unfixable that making the official language tools play nice with distro packaging (and the other way round) wouldn't be preferable over manually re-writing build systems all over the place? Dub is not unfixable, but there are many more issues with it which make it very, very hard to use in a distribution context - I tried that and it didn't work well. I am treating D in Debian like I would treat C++, which is quite fair given the similar feature sets and challenges. But with an unstable ABI the standard library is also affected, which would force us to do Haskell-style versioning (which mangles dependencies to depend on a virtual package containing a hash of the GHC version), and that not only sucks but also requires quite a lot of manpower. So it sounds like there is a solution already for other languages. Could you elaborate some more on the problems with it? I suppose there is some wiki page documenting the process somewhere? Haskell and OCaml rebuild the whole stack on every new compiler release, which is why they have permanent transition trackers[1], so they basically continuously rebuild. I want to avoid this at all costs for D, as it is very maintenance-intensive and painful, and would require many more people to work on it than D has available in Debian. [1]: https://release.debian.org/transitions/ Rust only has one compiler which strongly optimizes, so we don't have the problem of choosing the right one.
Cargo is/was an issue but it's being worked on and seems to work well now: https://wiki.debian.org/Teams/RustPackaging/Cargo Rust doesn't have a stable ABI either, and it doesn't look like there is any movement in that direction (not that I think that there should be). That the people driving the effort might not be aware of it yet doesn't mean it isn't an issue for them. It's being worked on[2], but it's not a super-high priority. There doesn't seem to be a definitive answer on how Rust is handled in Debian yet (but to know for certain, I would need to ask the Rust team). [2]: https://github.com/rust-lang/rfcs/issues/600 IIRC OCaml is also very much a statically linked affair. And how does Debian distribute Go binaries? Is there any issue with those being linked statically? If not, let's just distribute D libraries as source and compile/link them statically when building binaries, and problem solved. Surprisingly it looks like many Go packages are indeed provided as source-installations. Doing something like this with D would require Makefiles and Dub to pick up sources from system locations properly which isn't really done yet... Some of the compiler developers, myself included, understand the issues involving ABI stability and distro packaging quite well (although the latter admittedly only on a general level). In fact, one of my earliest open source memories is of some work in the trenches ensuring ABI stability of some bits of KDE across releases. Yet we are still going to tell you that the D ABI is going to remain unstable for the foreseeable future. This is not something that just requires a man-week or month to "fix" in the compiler, but would impact many other areas as well, for example language evolution. If you somehow got the impression that this is just due to D developers "not getting it", just have a look at the other recent compiled languages. 
Go and Rust don't fare any differently, and even Swift, with all its development manpower, doesn't have a stable ABI yet [1]. And I believe header-only C++ template libraries have been mentioned already as well. I can only speak for myself, of course, but I certainly see the strategic importance of integration into the Linux distribution ecosystem for D, and I'm very happy to work with packagers wherever possible. However, you also need to acknowledge the properties of the ecosystem *you* are working with. If you see a big stretch of difficult terrain in front of you, closing your eyes won't make it go away; you'll only lose time you could spend working around it. ABI instability is something you'll have to work around one way or the other. That's the whole point of this thread: I want to find the best solution to deal with this issue, and also one the D community can live with. I am not set on any particular solution yet; at the moment I see the problem and I am thinking about how to deal with it in the best way possible.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 22:36:39 UTC, Iain Buclaw wrote: On 10 April 2017 at 23:52, David Nadlinger via Digitalmars-d wrote: On Monday, 10 April 2017 at 20:43:06 UTC, Iain Buclaw wrote: Master sports Phobos 2.071. Someone will have to see whether later versions can be built using it. … and some weird Frankensteinian mix of several frontend versions, I take it, maybe enough to build Phobos, but not necessarily compatible for user code? Or did you port all the changes since 2.068.2 back to C++? — David All the regression fixes and none of the bugs! The current situation is that it should be link-compatible with current upstream/stable. Enough so that when someone has the time to test, it should just be a case of switching the sources and building the D version. First of all, thank you for your tremendous work on GDC! Fellow developers and I were also pretty stunned by you maintaining quite a large number of different GDC versions in parallel without a huge team - that's some impressive work! What is blocking GDC's inclusion in GCC? Just manpower? Also, you were talking about "bugs" on several occasions - what is that about? Is it GCC or general Phobos bugs? It would probably be awesome to have a summary blog post or similar on the state of GDC, that could potentially also attract volunteers. Anyway, all a bit off-topic :-)
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 22:26:46 UTC, Joseph Rushton Wakeling wrote: On Monday, 10 April 2017 at 13:20:00 UTC, Matthias Klumpp wrote: This has worked nicely for every language. If you don't have templates in your API or don't change the templates between releases, you can survive with one library for a long time. But the vast majority of D libraries _do_ have templates (starting with Phobos). How should this situation be dealt with? How does Debian deal with, e.g., fixes to the templated code in Boost, which impact on other packages built using those header-only libraries? Boost's soversion is changed on every release, and the version is included in its -dev package name as well. That's why we have libboost1.62-dev: https://packages.debian.org/de/sid/libboost1.62-dev (and possibly more). There is also a boost-defaults package setting the current default Boost version for packages to depend on. If a new Boost comes out, its soversion and -dev package name change, triggering a package transition and subsequently a full rebuild of all stuff depending on Boost. Doing something like this with D libraries would obviously be possible as well.
Re: [OT] Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 22:15:53 UTC, David Nadlinger wrote: On Monday, 10 April 2017 at 17:50:08 UTC, Matthias Klumpp wrote: I am reading release notes, so we rebuilt dependencies of LDC - (I assume you mean reverse dependencies.) […] But since no bugs were reported, I assume no issues are present :-) So do we need to put a reminder about the ABI being unstable into every set of release notes to make sure we won't get angry bug reports once users actually build their own D code against your packages? ;) Nah, there are several options here, one would simply be to tell people not to use the distro packages with anything but the default D compiler used in the respective Debian release. Go apparently tells people not to use Debian-shipped Go code in their own projects at all.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 20:11:28 UTC, Wyatt wrote: On Monday, 10 April 2017 at 18:46:31 UTC, H. S. Teoh wrote: Hmm. I guess there's no easy way to make dmd/ldc emit dependencies with modified SONAMEs? So yeah, you're right, every software that depends on said libraries would have to explicitly depend on a different SONAME depending on what they were built with. OK, crazy idea, nevermind. :-( Doesn't sound that crazy; you already do it with GCC versions, right? (Debian _does_ have something like that, right? Where you can pick your C compiler.) Yes, that's why all packages need to honor the CFLAGS/CC env vars somehow or get the default C compiler from dpkg at build time, so we can easily apply new C flags globally and projects that build the distro with Clang can work. We never change the SONAME of anything, however. We do track symbols and might change the SOVERSION occasionally if breakage is found (making 3 -> 3a etc.), but the soname isn't changed. The good thing is that Clang and GCC are (with few exceptions) very compatible.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
I apologize in advance for the large amount of mail that will likely follow, but I want to address all comments. On Monday, 10 April 2017 at 18:46:31 UTC, H. S. Teoh wrote: [...] One issue, though: if we standardize on compiling Debian packages with ldc, then what do we do with libraries that are not ABI-compatible with dmd? Since users would expect that if they need libfoo, they'd just `apt-get install libfoo-dev` and then they should be able to just run `dmd -L-lfoo` and it should all magically "just work". That's the problem I would like to see addressed (but given Walter's comment, it won't be feasible to resolve it in the near future). We could simply tell people "don't use distro packages for your D programming" - at least that's what Go recommends.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 17:29:04 UTC, H. S. Teoh wrote: On Mon, Apr 10, 2017 at 11:40:12AM +, Matthias Klumpp via Digitalmars-d wrote: [...] [...] If we do that, we will run into the D ABI trap: Libraries compiled with compiler X can not be used from software compiled with D compiler Y. There is actually no ABI stability guarantee even between DMD releases. This will make integrating D a huge pain. Recompiling the dependency chain of a software from source when compiling a package using the "right" compiler and statically adding the code is forbidden by distro policy. Having static libraries in the dependencies doesn't solve the issue. Compiling each library with all D compilers is highly impractical and not really feasible. This is not a hard problem to solve, IMO. Just build the library into two separate binaries, each with a SONAME that encodes the ABI it is compatible with. This would require hacking each and every build system to support this *and*, if there is a pkg-config file for the shared library, changing all depending software to check for multiple library names, which is a bit crazy... The resulting two .so's can either be distributed as separate packages (for minimum bloat, if that's a concern), or as a single package that contains both binaries (since they have different SONAMEs this should not be a problem). Since one library == one package in Debian, it would have to be multiple packages; otherwise we would need to override Lintian errors/warnings, which is always a bad idea. Then if you compile some software X that depends on this library, it will pick up the correct version of the library depending on which compiler you compiled with. Unfortunately not without special support in the software's build system :-/ [...] DMD is unlikely to support other archs than amd64/ia32 in the foreseeable future, so the justification for dmd being unavailable for arch X would be that upstream DMD simply doesn't support it.
This, however, should not prevent us from using gdc/ldc on those other archs, so that we can still ship packages for those archs. They will merely require ldc rather than dmd. And obviously, libraries built for that arch will only support the ldc SONAME, not the dmd one. (This may be an argument for bundling both SONAMEs in a single package -- it gets messy if we start shipping multiple libraries, some of which may not be available on all archs. By shipping a single package that includes both versions for ia32/amd64, we can simply omit the DMD-compiled version from other archs.) Conditional build-dependencies are a bit annoying, but with a metapackage "d-compiler" or similar, using different D compilers on different architectures would definitely be possible. Unfortunately, I realize that this means some packages that require the latest DMD would not be available on all archs, if they require features unavailable in gdc/ldc. But this problem may not be a huge one, now that ldc is mostly up-to-date with dmd (at most 1 release behind, IIRC). GDC may lag behind a bit more because it is unfortunately tied to GCC releases, so we may have to forego using gdc for building newer D packages. But we should be able to ship most D packages compiled with both. Compiling with multiple compilers is a really big effort with rather questionable gain, IMO. But as far as LDCs compatibility with other D projects goes: That is really good, the only reason you sometimes can't compile some random D code with LDC might be bugs, but not old standard libraries. Furthermore, I wonder if we should standardize on ldc for most D software instead of dmd, unless that software absolutely depends on features only available on dmd. My reasons are: - While dmd compiles very fast, it consistently (IME) produces code that runs about 20-30% slower than code produced by gdc (and I presume ldc as well). 
Since we're talking about building Debian packages on Debian's buildds, which are background batch processes, compilation speed is no big deal, but the performance of the executable *is* a big deal. The last thing we want is to give a poor impression of D by shipping official Debian packages that have subpar performance. - DMD is unlikely to target other archs than ia32/amd64 in the foreseeable future, AFAIK, unless the recent relicensing triumph of dmd's backend makes this future more likely. There will definitely be resistance among DDs because of the lack of support for other archs. LDC, in contrast, supports all of Debian's supported archs. - LDC is already available in Debian, meaning that we can already start adding D packages built with ldc without needing to work through the red tape involved in adding a new compiler to the archive. I agree with all of that, I think sticking with LDC might indeed be the least painful option.
Re: [OT] Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 16:58:05 UTC, Johan Engelen wrote: On Monday, 10 April 2017 at 13:20:00 UTC, Matthias Klumpp wrote: Btw, at the moment we just ignore the ABI issues, and surprisingly nothing has broken yet, indicating that ABI breakage isn't very common or doesn't affect commonly used interfaces much. One big ABI change was in 2.071: https://issues.dlang.org/show_bug.cgi?id=15644. And it involved interfaces. ;-) Nothing broke because of that? I am reading release notes, so we rebuilt dependencies of LDC - I have no idea about GDC-depending D code though. But since no bugs were reported, I assume no issues are present :-)
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 16:12:35 UTC, Iain Buclaw wrote: [...] Everyone should follow GDC's ABI, rather than trying to mimic DMD calling convention. ;-) GDC is working very well, and using it would actually be the natural choice for us as GCC is the default compiler. However, there are a few problems with that: 1) GDC doesn't compile a large amount of D code currently out there due to having an outdated standard library and runtime. While we would like to use it, it doesn't help if it just can't compile the things. 2) GDC isn't part of the official GCC. This has quite wide reaching implications, most importantly other distros not offering it - in Fedora / Red Hat, this is the reason why GDC isn't available. This means fewer people test with GDC and we're basically compiling upstream projects with a compiler they never ever tested, potentially creating new issues that we can't easily forward upstream. This is less of an issue with LDC as LDC is more widely available in other distributions.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 14:33:34 UTC, qznc wrote: On Monday, 10 April 2017 at 11:40:12 UTC, Matthias Klumpp wrote: 1) Is there some perspective on D getting a defined ABI that works with all major D compilers? 2) What would the D community recommend on how to deal with the ABI issues currently? A Linux distribution is a bunch of tightly integrated software, and changing one piece in an incompatible way (e.g. by building it with LDC instead of DMD) will have consequences. 3) Will DMD support more architectures in the near future? How should the architecture issue be handled? My prediction for Walter's reply: 1) No. Not worth it, because templates, ctfe, etc. That's short-sighted IMHO, because if the template doesn't change, the ABI/API doesn't change. Also, some projects use D as a better C and don't expose this functionality. It should be up to the project to set the level of API/ABI stability, and not to the compiler to make everything unstable by default. [...] Tentative ping, but that Wiki page is not helpful. The linked svn repo is empty. Where and how do you work? Yeah, the page is really poor, it was last touched in 2012. I made a few updates to at least link to the current Git repo. We generally work on various Git repositories, but not all of them are run by the D team (e.g. libundead and libbiod as well as several games are things I am aware of that aren't D-team maintained but are part of other teams' work). One can find all stuff using D by testing the reverse build-depends on the LDC and GDC compilers. I guess the issues are still the same as you wrote here (except 1. is solved): https://gist.github.com/ximion/fe6264481319dd94c8308b1ea4e8207a So, mostly dub needs work, I guess. Yes, but since Meson is working well and Meson scripts are easy to write, it's not a super high priority item anymore.
As I said earlier, work as a distribution developer is pretty much always about reducing long-term maintenance cost, and not about less work short-term, which means we will gladly write Meson or Automake scripts to integrate software into Debian if there is a demand for it. On Monday, 10 April 2017 at 15:11:01 UTC, Jack Stouffer wrote: On Monday, 10 April 2017 at 11:40:12 UTC, Matthias Klumpp wrote: 3) Will DMD support more architectures in the near future? How should the architecture issue be handled? This can be definitively answered as "no", https://issues.dlang.org/show_bug.cgi?id=15108 *Walter Bright*: Doing an ARM back end wouldn't be that hard. It's much less complex than x86. Most of the work would be deleting about half of the x86 code generator :-) :D - doesn't sound like a flat-out no, much more like there just wasn't someone doing the work yet.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 14:21:43 UTC, Gerald wrote: On Monday, 10 April 2017 at 11:40:12 UTC, Matthias Klumpp wrote: There are two issues, though, that we will be facing in Debian soon, and I would like to get some opinions and maybe perspective from the D community on them. First I would like to say thank you for all the work you did in getting Tilix packaged for Debian, it is very much appreciated. While I'm not an expert in this topic I'll throw in my two cents anyway :) Thanks! So to the topic at hand, I tend to agree with Vladimir in that I just don't see it as feasible to make every dependency a shared library. Personally I would suggest that only libphobos and libdruntime be managed as shared libraries by a distro. Otherwise, any other dependencies used in DUB or elsewhere would be statically compiled. This would raise the maintenance effort for D software and require policy exceptions, which is something we won't do. It's more likely that we'll see less D in the distro than us going down this route. DUB is unusable for Debian packaging anyway, and alternatives like Automake or Meson handle shared libraries really well, so in itself there is no reason not to do it. If in time a new D library ends up becoming a keystone foundation for many packages it could be considered for inclusion as a shared library. Otherwise, trying to manage every DUB dependency as a potential shared library is a huge amount of work and I don't feel most of them are mature or well maintained enough to support this approach. We'll take care of that, it's what we do as a distro and we have a fair amount of experience in handling these kinds of upstream projects. The goal is to reduce the maintenance cost for things once they are *in* the distro; any prior work to get them into the distribution is very well invested. That's why I have been writing a lot of Makefiles and Meson build definitions lately.
If you just concentrate on the compiler, libphobos and libdruntime, you can have separate packages for each compiler toolchain, which is what Arch does. That's a necessity already and we do the same. But with an unstable ABI the standard library is also affected, which would force us to do Haskell-style versioning (which mangles dependencies to depend on a virtual package containing a hash of the GHC version), and that not only sucks but also requires quite a lot of manpower. It has a libphobos package for DMD and a liblphobos for LDC. This then enables developers to specify the toolchain they prefer without interference. Yes, but that's not feasible for use *in the distribution itself*, as we can't just pick and choose the right compiler per package if something up in the dependency chain was compiled with the "wrong" compiler. In addition to the question about D and C++: what do distros typically do for Rust and Cargo dependencies, or Java and Maven? Wouldn't that be a similar paradigm to D and DUB dependencies? Maven dependencies are also separated out into smaller packages, and Maven (unlike dub) plays relatively well with Debian. It also has no ABI issues, and interface stability can be tracked. Rust only has one compiler, which optimizes strongly, so we don't have the problem of choosing the right one. Cargo is/was an issue, but it's being worked on and seems to work well now: https://wiki.debian.org/Teams/RustPackaging/Cargo (note that I am not involved in Haskell, Rust or Java packaging). Just because it makes sense: We do have an upstream guide showing some best practices for upstream projects which help us to maintain software for long periods. It also contains language-specific advice and some general info which is valid for all languages: https://wiki.debian.org/UpstreamGuide
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 12:59:58 UTC, qznc wrote: [...] How do Debian and C++ go along? There is no ABI compatibility between GCC and Clang afaik. Clang offers compatibility for most basic features. There are some ABI compatibility issues though and you find them reported in the Clang/libc++ bugtrackers, and it's a pain (but the Clang/LLVM guys think they can do some ABI better/faster than what GCC offers, so some breakage is deliberate). In Debian, GCC compiles everything as the system's default compiler, so at least inside the distribution we don't have to worry about potential incompatibilities. Since GCC also supports an enormous amount of architectures and has strong optimization, the case is different there. In terms of "what happens when users use the OSes C++ libraries and compile with Clang instead of GCC" the situation is similar though: They might run into ABI issues (rarer though than with D). For the distro itself the problem doesn't exist though.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 13:07:22 UTC, Vladimir Panteleev wrote: On Monday, 10 April 2017 at 12:59:37 UTC, Matthias Klumpp wrote: Who came up with those policies and decided that they apply to D? Because I really don't think they should. [...] You need to see here that D is not the center of the world and we will need to make it work nicely with the rest of the system. The opposite is also true: requiring a stable shared library API of every packaged D library is just as unreasonable. In fact, to make these rules useful and applicable to all D programs, you'd have to completely forbid templates in the library's public interface, which would immediately exclude Phobos for one. There is a really easy way to fix this: SONAMEs. Whenever you change something in the library breaking ABI or API, you bump its SOVERSION, which will force the distribution to perform a transition and rebuild the dependency chain. If you give absolutely zero stability guarantees, you just set the SOVERSION equal to the project's version and trigger a transition every time (incredibly annoying, but, well, okay). This has worked nicely for every language. If you don't have templates in your API or don't change the templates between releases, you can survive with one library for a long time. This works really well on the level of individual libraries, but if the whole language is ABI-unstable, the issues are much bigger and harder to track. Btw, at the moment we just ignore the ABI issues, and surprisingly nothing has broken yet, indicating that ABI breakage isn't very common or doesn't affect commonly used interfaces much.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 12:40:33 UTC, Vladimir Panteleev wrote: [...] Can we treat it more like an interpreted language instead? An interpreted language would interpret the code on the target system at runtime. This is not what D does, so we can't really treat it like we treat Python (where it is possible to neatly separate Python modules into separate packages - transitions triggering rebuild cascades only exist when we jump to the next major CPython version, which is something distributions are well prepared for. Transitions are only an issue if they happen constantly). At the moment, D is treated like C++, since it has much of the same challenges and we know how to deal with C++ - on top of C++'s problems, though, D unfortunately also has the unique issues outlined above, which complicate things. I also want to stress that having a single C++ library like Boost compiled into stuff and rolling dependency transitions when its API/ABI changes with a major release is less of a problem than having the entire language give zero stability and interoperability guarantees on anything that is compiled with it.
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 12:40:33 UTC, Vladimir Panteleev wrote: On Monday, 10 April 2017 at 11:40:12 UTC, Matthias Klumpp wrote: Recompiling the dependency-chain of a software from source when compiling a package using the "right" compiler and statically adding the code is forbidden by distro policy. This is the part that I do not understand. Who came up with those policies and decided that they apply to D? Because I really don't think they should. They are the result of years of experience in building complex systems and keeping them secure. If you have a dependency chain "X -> Y -> Z" (-> meaning "depends on"), and you find a security bug in Z, the security team will just need to fix the bug in Z to resolve it in the whole distribution. But if the code which has this issue is compiled into all of the packages that depend on it, you will need to rebuild the full dependency chain to actually fix the security issue, which is not only time intensive but also a huge maintenance effort. In this simple example it doesn't look like much, but those dependency chains can grow massive and complicated, and the only way to keep the large software stack maintainable and secure is by splitting pieces cleanly. Embedded code copies are allowed in rare cases, but then the security team needs to be aware of them. Sometimes, the licenses also explicitly prevent embedded code copies. Aside from these issues, splitting things cleanly also makes general package maintenance much easier, and adds flexibility for our users, who can mix and match parts of the distribution as they like and combine them with their own code. You need to see here that D is not the center of the world and we will need to make it work nicely with the rest of the system.
The technical policies work for everything else, so there is nothing that really justifies an exception for D here (if 10% of Debian's code was written in D and the Debian D team was really large we could maybe get one, but not the way it is now). And tbh, I think finding a good solution here is entirely possible.
The D ecosystem in Debian with free-as-in-freedom DMD
Hi there! These are probably questions directed mostly at Walter and others shaping D's goals, but this could be of general interest for many people, so to the forum it goes :-) DMD is completely free software now and we can legally distribute it in Debian main - yay! This is an awesome achievement and will make D adoption in Linux distributions much easier (in fact, Red Hat is working on getting good support into Fedora too already). There are two issues, though, that we will be facing in Debian soon, and I would like to get some opinions and maybe perspective from the D community on them. Naturally, when the reference compiler is available in Debian, we would compile everything with that, as it is the development focus and the thing many people test with. We do, however, have quite a bit of bioinformatics and other D software in the archive where performance matters - so our users and the developers of that software (like BioD, potentially Mir, maybe even Vibe.d) will want the fastest performance and will ask us to compile the libraries with LDC or GDC. If we do that, we will run into the D ABI trap: Libraries compiled with compiler X cannot be used from software compiled with D compiler Y. There is actually no ABI stability guarantee even between DMD releases. This will make integrating D a huge pain. Recompiling the dependency-chain of a software from source when compiling a package using the "right" compiler and statically adding the code is forbidden by distro policy. Having static libraries in the dependencies doesn't solve the issue. Compiling each library with all D compilers is highly impractical and not really feasible. So, how should we proceed here? We could make it "DMD is the only thing on the highway", compiling everything with DMD with zero exceptions, which would leave us with only DMD-internal ABI breakage and bad D code performance for some libraries.
We could also continue using LDC for everything, but that comes with its own issues (I am hitting quite a bunch of LDC bugs, and upstream projects usually test with DMD or use features which are only in the latest DMD). The other issue is architecture support. In Debian we are strongly encouraged to support as many architectures as possible, to the point of having to justify why arch X is not supported. LDC runs on at least armhf and ppc64el in addition to amd64/ia32, while DMD says it's specifically only for ia32/amd64. This means we might end up compiling stuff with different D compilers on different architectures, or we will need to drop architectures and request arch-specific package removals for things that currently build on architectures not supported by DMD, which will trigger resistance. So, in summary: 1) Is there some perspective on D getting a defined ABI that works with all major D compilers? 2) What would the D community recommend on how to deal with the ABI issues currently? A Linux distribution is a bunch of tightly integrated software, and changing one piece in an incompatible way (e.g. by building it with LDC instead of DMD) will have consequences. 3) Will DMD support more architectures in the near future? How should the architecture issue be handled? I am interested in some feedback here, since I currently can't see a good way to address these issues. Also: If you want to help out Debian's D team, feel free to ping me or any other D team member (we are very short-handed, with only two active people right now). See https://wiki.debian.org/D (caution, wiki page is very outdated, last touched in 2012)
Re: Exceptions in @nogc code
On Saturday, 1 April 2017 at 13:34:58 UTC, Andrei Alexandrescu wrote: Walter and I discussed the following promising setup: Use "throw new scope Exception" from @nogc code. That will cause the exception to be allocated in a special stack-like region. If the catching code uses "catch (scope Exception obj)", then a reference to the exception thus created will be passed to catch. At the end of the catch block there's no outstanding reference to "obj" so it will be freed. All @nogc code must use this form of catch. If the catching code uses "catch (Exception obj)", the exception is cloned on the gc heap and then freed. Finally, if an exception is thrown with "throw new Exception" it can be caught with "catch (scope Exception obj)" by copying the exception from the heap into the special region, and then freeing the exception on the heap. Such a scheme preserves backward compatibility and leverages the work done on "scope". Andrei How would you deal with the Exception payload, e.g. the message string ? If I have to place it on the heap, I lose @nogc, if I place it in the local scope, it will be destroyed before the Exception handler is called and if I restrict the mechanism to fixed payloads, I could have used static Exception objects in the first place. It may be possible to copy the message string from local scope to the special stack on throw, but in the general case the payload could be an arbitrary tree of Objects allocated on the heap or the stack, or even on the special exception stack itself. A method I used to pass scope data up the stack is by using "thrower delegates", i.e. the @nogc throwing function doesn't directly throw an Exception, but it calls a delegate that is passed all the payloads as arguments and which throws a static Exception. This thrower delegate is provided by the Exception catching function, and has the opportunity to read/copy the parts of the payload it is interested in before the stack is unwound.
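The "thrower delegate" pattern from the last paragraph can be sketched roughly as follows. All names here (parse, runParser, Thrower) are illustrative, and the @nogc annotations are omitted to keep the sketch short; the point is only that the payload is read by the catcher-supplied delegate before the stack is unwound:

```d
// The error-reporting delegate is supplied by the catching side; the
// low-level routine only hands it the payload and lets it throw.
alias Thrower = void delegate(const(char)[] detail);

void parse(const(char)[] input, scope Thrower fail)
{
    if (input.length == 0)
        fail("empty input"); // payload is still alive at this point
}

string runParser(const(char)[] input)
{
    string captured;
    try
    {
        parse(input, (const(char)[] detail) {
            // copy the interesting parts of the payload while the stack
            // frame owning it still exists, then throw
            captured = detail.idup;
            throw new Exception("parse failure");
        });
        return "ok";
    }
    catch (Exception e)
        return captured;
}
```

A static preallocated Exception could replace the `new Exception` here to keep the throwing path allocation-free, as the post suggests.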
Re: D-Apt package numbers
On Monday, 20 March 2017 at 08:52:05 UTC, Russel Winder wrote: I see that D-Apt has the Debian revision number on packages starting at 0. I had understood that the policy was to start at 1. For stuff in *Debian* that is true; anything not in Debian should start at zero and add a "repository tag" to the Debian revision if the package is new or has a new upstream version in their repository. E.g. in Ubuntu, if the upstream version is "1.0", the revision Ubuntu chooses is "1.0-0ubuntu1". If a Debian package is modified, the "ubuntuX" tag is added to the Debian revision. This ensures that Debian packages are preferred if they are available, and that users as well as the Debian package maintainer know where stuff came from when people report bugs (less of an issue on Ubuntu and other derivatives, where the tags are required to properly produce deltas between Debian and the derivative and to merge packages from Debian safely). So, if d-apt is doing something like this, everything is fine :-)
Re: The end of curl (in phobos)
On Saturday, 18 February 2017 at 22:48:53 UTC, Dmitry Olshansky wrote: [...] For some time I was a proponent of yanking stuff from Phobos into oblivion. Now I'm not. Stop breaking code. Yes, we should think harder before introducing libraries into Phobos but continuing on with removal of stuff just introduces the continuous churn for no adequate benefit. What would anyone get from std.net.curl removal? Are you going to find all of D programs and patch them to use something else? Yes, Phobos is full of historical accidents and cruft. I'm constantly tempted to propose Phobos v2 properly _designed_ (not *grown*) and without the junk. I really think it might be a good idea but only when we actually know what a proper design looks like. Due to writing the AppStream metadata generator in D, which is an infrastructure piece of many Linux distributions, I have a fair bit of knowledge now about the problems people (especially newcomers who just want to scratch an itch and submit a quick patch) encounter when working with D code. The inconsistent standard library is - after compiler bugs - the biggest issue. Some people described Phobos as "PHP-esque" in terms of design, and I have to agree with them. Working with it is often unpleasant, and you can clearly see which parts of it were designed recently and which are "historical accidents". Constantly shuffling stuff around in Phobos and adding/removing things will not solve the problem, it will just give newcomers a feeling of insecurity and make the language feel less mature. Stunningly, a lot of projects write their own primitives instead of using Phobos (Vibe, lots of dub modules just providing containers, just now another general purpose utility library was announced on the forums, ...) which is a clear sign that Phobos isn't seen to be sufficient. 
I think investigating building a "Phobos2" standard library would be a very good idea - make it opt-in for a while, and then set a flag day and switch, so people will only need to adjust their code once to jump to the new version, and don't constantly need to play catch-up with Phobos API breaks and riddle their code with version() and static if instructions to be able to compile with multiple Phobos and LDC/GDC/DMD versions. Cheers, Matthias
Re: D at FOSDEM this weekend
On Friday, 3 February 2017 at 06:59:32 UTC, David Nadlinger wrote: Hi all, This year's FOSDEM is taking place Saturday–Sunday in Brussels (registration-less open source software event). Any D heads in the area? Kai Nacke is going to give a talk on PGO in LDC in the LLVM dev room [1], and I'll also be around. Too bad that I am not there this year :( (first FOSDEM to miss in a while). Would have been nice to meet! Enjoy the conference and the Belgian beer!
Re: It is still not possible to use D on debian/ubuntu
On Saturday, 14 January 2017 at 23:24:18 UTC, Jack Applegame wrote: On Saturday, 14 January 2017 at 18:41:21 UTC, Russel Winder wrote: On Sat, 2017-01-14 at 17:28 +, Elronnd via Digitalmars-d wrote: On Friday, 13 January 2017 at 11:50:25 UTC, Russel Winder wrote: > LDC which is packaged by both Debian and Fedora is the > only practically usable D compiler on both these platforms. What's impractical about downloading and installing an rpm? For that matter, downloading the source and compiling it isn't all that impractical either. Downloading and installing an RPM outside of dnf. What do you mean "outside"? I use DMD on CentOS, and I installed it by command: yum install http://downloads.dlang.org/releases/2.x/2.072.2/dmd-2.072.2-0.fedora.x86_64.rpm From your point of view it is "outside" or "inside" of yum? Still outside, because it is not developed as part of Fedora. One big reason for getting a language's toolchain into the main repositories of a distribution that Russel didn't mention yet is the additional QA and testing it will get. For example, the PIE/PIC issue would not have happened at all if people were using the tools provided by the distribution, because we made sure that every tool we ship works with this change. Using pieces that are part of the distribution is also way easier than getting them from external sources, mainly because we can give a lot of guarantees about software we ship in a distribution. And also, D software that is itself part of the distro will be compiled with one of the purely-free compilers anyway, so if you target one of those it just makes sense to primarily use LDC or GDC to ensure the software works well.
Re: Red Hat's issues in considering the D language
On Friday, 23 December 2016 at 15:02:23 UTC, Ilya Yaroshenko wrote: [...] It is not true for Mir projects, sometimes ICE occurs without any description while LDC just works. --Ilya Reporting ICEs as bugs requires too much effort because the code size has to be reduced. I found quite a few in LDC too ;-) In any case, DustMite[1] helps to greatly reduce the time needed to create a minimal testcase to report as a bug, and the tool will soon be available as a Debian package as well (anything that doesn't use dub and isn't a library is rather easy to package). [1]: https://github.com/CyberShadow/DustMite
Re: Red Hat's issues in considering the D language
To clarify this point on the list: On Thursday, 22 December 2016 at 10:40:32 UTC, Kagamin wrote: On Tuesday, 20 December 2016 at 23:08:28 UTC, Andrei Alexandrescu wrote: https://gist.github.com/ximion/77dda83a9926f892c9a4fa0074d6bf2b Aren't requirements for packaging and recent versions mutually exclusive? The packaged version will undergo version freeze and will be older than the recent version no matter what you package. This is true once the distribution is frozen, but before that there is a time when we will just get new software versions in there as soon as they are released. But the much more important point for us is support and maintainability. The reference compiler will have a much bigger development team and a higher focus of attention. Additionally, people will likely build their code with that compiler and might not test with other compilers. So, if we then take D code and build it with a configuration upstream didn't test, and then encounter a bug, this will be an additional obstacle to overcome when communicating with upstream. Furthermore, the compiler package - once frozen - will have to be supported for many years, and a bigger team behind it helps in finding issues and fixing them. Additionally, people learning D will be told "use DMD" and won't find it in their distribution, which is annoying for them (they think D isn't well supported, while our LDC/GDC packages are less used). Those are all points which make it useful to have a completely free compiler as reference compiler and in the distribution. On the point of "free" being ideological: It of course is also an ideological issue, but that's not the only point. See this excerpt from the DMD backend license: ``` The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars. ``` This alone makes it impossible for us and all our derivatives to legally redistribute the software.
Adding it to a distro is a no-go. Also, licenses restricting modification of software, or proprietary software in general, make it impossible for us to deliver security fixes, and also make integrating the software into the system much harder and sometimes impossible. For GDC: Being part of GCC would be very awesome there, because then the toolchain team of the respective distributions could easily make the D compiler available and maintain it (as done with e.g. gccgo for the Go language). At the moment we patch GDC into Debian, but it looks like Red Hat will not go that way on RHEL/Fedora (and I completely understand why they don't want to do that). Anyway, it's great to hear that the GDC Phobos isn't as old anymore as it was when I wrote the first version of the list :) Confusing claim that he can't use dmd given that he says he uses it. Huh? Where is this stated?
Re: Installing ldc breaks gdc
Hi! This issue should be fixed since LDC 1:1.1.0-2, which Xenial doesn't have. Ideally, fetch a newer version from Debian or a PPA to solve this issue. Cheers, Matthias
Re: calling convention optimisation & const/immutable ref
On Tuesday, 8 November 2016 at 12:56:10 UTC, Johan Engelen wrote: On Monday, 7 November 2016 at 16:48:55 UTC, John Colvin wrote: [...] This reminds me of an LLVM presentation by Chandler, mentioning that passing by reference may hamper the optimization of code (because memory becomes involved). I don't think he gave a clear example, but one thing I can think of is potential aliasing: void foo(ref LargeThing a, ref LargeThing b), where a and b may alias partly. To do the "ref" optimization for _all_ large function parameters per default sounds like a bad idea. Also, how would I turn the optimization off (I guess extern(C++) would do that?)? What if my function needs a local copy (say, in multithreaded stuff), and I want that copy to be const? -Johan Aliasing is not an issue when LargeThing is immutable, is it?
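The aliasing hazard Johan mentions can be made concrete with a small illustrative example (LargeThing/foo are made-up names): with two mutable ref parameters, the compiler must assume a write through one name may be visible through the other, because both could refer to the same object.

```d
struct LargeThing { int[16] data; }

// The compiler cannot cache a.data[0] in a register across the write
// to b, because a and b may alias.
int foo(ref LargeThing a, ref LargeThing b)
{
    a.data[0] = 1;
    b.data[0] = 2;
    return a.data[0]; // 1 for distinct arguments, 2 if a and b alias
}
```

With immutable parameters no writes through the reference are possible at all, so aliasing between them cannot change observable values - which is the intuition behind the closing question above.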
Re: If Statement with Declaration
On Thursday, 3 November 2016 at 22:29:34 UTC, Jerry wrote: if(int i = someFunc(); i >= 0) { // use i } Thoughts on this sort of feature? I would prefer the syntax if( (int i = someFunc()) >= 0 ) { // use i } as this matches the already existing assignment expression syntax if you pull the declaration out of the if expression.
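For reference, the closest thing that works in D today is an explicit block to limit the variable's scope (a sketch; someFunc and currentIdiom are stand-in names):

```d
int someFunc() { return 3; }

int currentIdiom()
{
    // today's workaround: an extra block keeps i from leaking into the
    // enclosing scope, which is what both proposed syntaxes would
    // achieve inside the if itself
    {
        int i = someFunc();
        if (i >= 0)
            return i; // use i
    }
    return -1;
}
```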
Re: [OT] fastest fibbonacci
On Sunday, 23 October 2016 at 23:17:28 UTC, Stefan Koch wrote: On Sunday, 23 October 2016 at 19:59:16 UTC, Minas Mina wrote: On Sunday, 23 October 2016 at 13:04:30 UTC, Stefan Koch wrote: Hi Guys, while brushing up on my C and algorithm skills, I accidentally created a version of Fibonacci which I deem to be faster than the other ones floating around. It's also more concise. The code is: int computeFib(int n) { int t = 1; int result = 0; while(n--) { result = t - result; t = t + result; } return result; } You can even calculate Fibonacci in O(1). An approximation of it. The Fibonacci sequence can be represented exactly as a linear combination of two exponential functions, but the two bases of the exponentials and the linear multipliers of them are irrational numbers, which cannot be represented exactly on a computer. However, the rounding error is so small that rounding to int will always give you the correct answer as long as you stay within the precision limit of the floating point type you use, e.g. a real should give you 64-bit Fibonacci in O(1), if the exponential function is O(1). PS: the exact formula is fib(n) = 1/sqrt(5) * (0.5 + 0.5sqrt(5))^n - 1/sqrt(5) * (0.5 - 0.5sqrt(5))^n. If you round to integer anyway, the second term can be ignored, as its magnitude is always < 0.5.
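The closed form described above (Binet's formula) can be sketched in D; fibClosedForm is an illustrative name. Since the magnitude of the second term stays below 0.5 for n >= 0, rounding the first term alone yields the exact integer result within the precision of real:

```d
import std.math : round, sqrt;

long fibClosedForm(int n)
{
    // fib(n) = 1/sqrt(5) * phi^n - 1/sqrt(5) * psi^n, with
    // phi = (1 + sqrt(5))/2 and psi = (1 - sqrt(5))/2; the psi term is
    // always smaller than 0.5 in magnitude, so it can be dropped
    immutable real s5 = sqrt(5.0L);
    immutable real phi = (1.0L + s5) / 2.0L;
    return cast(long) round(phi ^^ n / s5);
}
```

As noted in the post, this is only O(1) to the extent the floating-point exponentiation is, and it stops being exact once phi^n/sqrt(5) exceeds the mantissa precision of real.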
Re: core.intrinsics
On Friday, 14 October 2016 at 15:42:22 UTC, David Nadlinger wrote: On Friday, 14 October 2016 at 13:07:10 UTC, Johan Engelen wrote: On Friday, 14 October 2016 at 12:55:17 UTC, Guillaume Piolat wrote: - this pointer is aligned to N bytes - this pointer doesn't alias with this pointer Do you mean these as "just a hint, should not generate invalid code if not true" or as "a certainty, allowed to generate invalid code if not true" ? For alignment/aliasing restrictions to be really beneficial, you have to be able to assume they hold, though. — David You could turn "hints" that can possibly create invalid code automatically into assertions in non-release builds. Or let the user add an assertion and use the "turn assert() into assume()" idea for release builds.
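One way to read the first suggestion: ship the hint as an ordinary assert, which is a real runtime check in non-release builds and is compiled out under -release, where a compiler could substitute an assume instead. A minimal sketch, with checkAligned as a made-up helper; the release-mode assume lowering is compiler-specific and not shown:

```d
// Returns p unchanged; in debug builds the alignment "hint" is a
// checked assertion, while -release removes the assert, and the idea
// above is that the same condition could then feed an optimizer assume.
T* checkAligned(size_t N, T)(T* p)
{
    assert((cast(size_t) p % N) == 0, "alignment hint violated");
    return p;
}
```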
Re: Can you shrink it further?
On Wednesday, 12 October 2016 at 09:23:53 UTC, Stefan Koch wrote: On Wednesday, 12 October 2016 at 08:56:59 UTC, Matthias Bentrup wrote: [...] All three are slower than baseline, for my test-case. What did you test it against? The blns.txt file mentioned upthread.
Re: Can you shrink it further?
On Tuesday, 11 October 2016 at 15:01:47 UTC, Andrei Alexandrescu wrote: On 10/11/2016 10:49 AM, Matthias Bentrup wrote: void popFrontAsmIntel(ref char[] s) @trusted pure nothrow { immutable c = s[0]; if (c < 0x80) { s = s[1 .. $]; } else { uint l = void; asm pure nothrow @nogc { mov EAX, 1; mov BL, 0xf8-1; sub BL, c; cmp BL, 0xf8-0xc0; adc EAX, 0; cmp BL, 0xf8-0xe0; adc EAX, 0; cmp BL, 0xf8-0xf0; adc EAX, 0; mov l, EAX; } s = s[l <= $ ? l : $ .. $]; } } Did you take a look at the codegen on http://ldc.acomirei.ru? It's huge. -- Andrei Here are three branch-less variants that use the sign instead of the carry bit. The last one is the fastest on my machine, although it mixes the rare error case and the common 1-byte case into one branch. void popFront1(ref char[] s) @trusted pure nothrow { immutable c = cast(byte)s[0]; if (c >= 0) { s = s[1 .. $]; } else if (c < -8) { uint i = 4 + (c + 64 >> 31) + (c + 32 >> 31) + (c + 16 >> 31); import std.algorithm; s = s[min(i, $) .. $]; } else { s = s[1 .. $]; } } void popFront1a(ref char[] s) @trusted pure nothrow { immutable c = cast(byte)s[0]; if (c >= 0) { s = s[1 .. $]; } else { uint i = 1 + ((3 + (c + 64 >> 31) + (c + 32 >> 31) + (c + 16 >> 31)) & (c + 8 >> 31)); import std.algorithm; s = s[min(i, $) .. $]; } } void popFront1b(ref char[] s) @trusted pure nothrow { immutable c = cast(byte)s[0]; if (c >= -8) { s = s[1 .. $]; } else { uint i = 4 + (c + 64 >> 31) + (c + 32 >> 31) + (c + 16 >> 31); import std.algorithm; s = s[min(i, $) .. $]; } }
Re: Can you shrink it further?
On Tuesday, 11 October 2016 at 14:24:56 UTC, Andrei Alexandrescu wrote: On 10/11/2016 03:30 AM, Matthias Bentrup wrote: A branch-free version:

```d
void popFront4(ref char[] s) @trusted pure nothrow
{
    immutable c = s[0];
    uint char_length = 1 + (c >= 192) + (c >= 224) + (c >= 240);
    s = s.ptr[char_length .. s.length];
}
```

Theoretically the char_length could be computed with three sub and addc instructions, but no compiler is smart enough to detect that. Interesting. 0x80 should be special-cased and you forgot to check the bounds. So:

```d
void popFront4(ref char[] s) @trusted pure nothrow
{
    immutable c = s[0];
    if (c < 0x80)
    {
        s = s.ptr[1 .. s.length];
    }
    else
    {
        uint l = 1 + (c >= 192) + (c >= 224) + (c >= 240);
        s = s.ptr[l <= s.length ? l : s.length .. s.length];
    }
}
```

This generated 27 instructions, i.e. same as the baseline. See: http://ldc.acomirei.ru/#compilers:!((compiler:ldc,options:'-release+-O3+-boundscheck%3Doff',sourcez:G4ewlgJgBADiMDEBOIB2AXAjACiQUwDMoBjACwEMkBtAXSgGcBKKAAXSQFd709oYP8UVCHSkUAdygBvAFBQoYALaKO6cgCMANnhJQAvAyoAGGgG45CotmJQAPFCMAPABxHms%2BfPr6GAOhjsVJhQvr5%2B2qgA5qJmFgC%2BUHia9DoenkpwSOgkIPi%2B6mDo8OaeUBxgGAo%2BAOwcUAC0UOr0SNgAfjYAPlCYHIwl6YqZ2dwQvuSakbmFpIoD8mBWYFAAfFAAbH1VBpjzDD70/oGKFdhgADTheFGizKFXN6Sx8nEyrzKgkLDwyGjoAEy4QgkCjUOjcJDMNicbi8WACHTCUQSaQWJQqNRaHQ2AwQ4zPSxQax2BwuNwWNLyAD0VICSAU3i4cKKUHIn2gHFQXLwxDw9HolAAnk0QJyIN4yDyANYVSK%2BCxedgHdhHajBe4Q3wRaJPAaveRJFKE4l6AxOAgERgUhVQGlNFplVAQQgVOEEXIOG0Q5VIVVBEJhTXamJ6iyGvDW0oZXLZYi5PD5QrwKAALntSD25FUICginozRqDXT7WI/RtiyJ2DzBfs/2Y3Sr%2Be8a3WjCtpUpnhpAElUMAJl8AKoAFQQ9WcNvk1e8Oz2%2BsGwwY6DGEymSBmcy9StxKrpVBOqEbzUuQeuOrugZVwd18TeMiAA)),filterAsm:(commentOnly:!t,directives:!t,labels:!t),version:3 But it doesn't seem to check for all errors. Andrei

This is the result I'd like to get, but I can't find a way to write it without inline assembly :(

```d
void popFrontAsmIntel(ref char[] s) @trusted pure nothrow
{
    immutable c = s[0];
    if (c < 0x80)
    {
        s = s[1 .. $];
    }
    else
    {
        uint l = void;
        asm pure nothrow @nogc
        {
            mov EAX, 1;
            mov BL, 0xf8-1;
            sub BL, c;
            cmp BL, 0xf8-0xc0;
            adc EAX, 0;
            cmp BL, 0xf8-0xe0;
            adc EAX, 0;
            cmp BL, 0xf8-0xf0;
            adc EAX, 0;
            mov l, EAX;
        }
        s = s[l <= $ ? l : $ .. $];
    }
}
```
Re: Can you shrink it further?
On Tuesday, 11 October 2016 at 04:05:47 UTC, Stefan Koch wrote: On Tuesday, 11 October 2016 at 03:58:59 UTC, Andrei Alexandrescu wrote: On 10/10/16 11:00 PM, Stefan Koch wrote: On Tuesday, 11 October 2016 at 02:48:22 UTC, Andrei Alexandrescu wrote: [...] If you want to skip a byte it's easy to do as well.

```d
void popFront3(ref char[] s) @trusted pure nothrow
{
    immutable c = s[0];
    uint char_length = 1;
    if (c < 128)
    {
    Lend:
        s = s.ptr[char_length .. s.length];
    }
    else
    {
        if ((c & 0b1100_0000) == 0b1000_0000)
        {
            // just skip one in case this is not the beginning of a code-point char
            goto Lend;
        }
        if (c < 224) { char_length = 2; goto Lend; }
        if (c < 240) { char_length = 3; goto Lend; }
        if (c < 248) { char_length = 4; goto Lend; }
    }
}
```

Affirmative. That's identical to the code in "[ ... ]" :o). Generated code still does a jmp forward though. -- Andrei It was not identical. ((c & 0b1100_0000) == 0b1000_0000) can be true in all of the 3 following cases. If we do not do a jmp to return here, we cannot guarantee that we will not skip over the next valid char, thereby corrupting already corrupt strings even more. For best performance we need to leave the gotos in there. A branch-free version:

```d
void popFront4(ref char[] s) @trusted pure nothrow
{
    immutable c = s[0];
    uint char_length = 1 + (c >= 192) + (c >= 224) + (c >= 240);
    s = s.ptr[char_length .. s.length];
}
```

Theoretically the char_length could be computed with three sub and addc instructions, but no compiler is smart enough to detect that.
Re: Required DMD changes for Mir and few thoughts about D future
On Saturday, 8 October 2016 at 18:53:32 UTC, Andrei Alexandrescu wrote: On 10/8/16 2:49 PM, Andrei Alexandrescu wrote: On 10/8/16 1:22 PM, Martin Nowak wrote: Integrating this with a pre-compiled ldc library is a fantastic idea OTOH. If we can make this work, it will be much less effort and yield the fastest implementation. Also would speed up the development cycle a bit b/c the kernels don't need to be recompiled/optimized. You mean dmd/ldc/etc interop at binary level? Yes, that would be pretty rad indeed! -- Andrei (after thinking a bit more) ... but Mir seems to rely in good part on templates, which makes pre-compiled libraries less effective. -- Andrei Independent from Mir, a stable ABI for D which all compilers follow would be a tremendous win, especially from the perspective of shipping D stuff in Linux distributions. So maybe this is worth attempting?
Re: Examples Wanted: Usages of "body" as a Symbol Name
On Wednesday, 5 October 2016 at 16:57:42 UTC, Rory McGuire wrote: On Wed, Oct 5, 2016 at 5:32 PM, angel via Digitalmars-d <digitalmars-d@puremagic.com> wrote: On Wednesday, 5 October 2016 at 02:11:14 UTC, Meta wrote: [...] Really, why do we need a _body_? We have pre-condition and post-condition (in and out), everything else is a body. It is simply inconsistent - a regular function with no in and out blocks has no body block. Now one adds a pre-condition (and / or post-condition) - whoop - one needs to wrap the whole function body ... well, in a body expression. Recently I've had to use "scope" a lot more often than "body", but reserved keywords are really annoying, so the fewer we have the better :D Agreed - I have exactly the same problem with "version", which is also a really common name for, well, holding the version number of a component. "body" is annoying too. But can keywords actually sanely be removed from the language without breaking the world?
Re: Overloading relational operators separately; thoughts?
On Thursday, 29 September 2016 at 18:38:42 UTC, Jonathan M Davis wrote: Then you could always do something like a.myOp!"<"(b) and a.myOp!">="(b) if you still want to have the operator in there somewhere. You can name the functions whatever you want. You just can't use overloaded operators for it, since it would not be in line with what the operators are supposed to mean and be used for. - Jonathan M Davis Also with mixins and a CTFE D parser you could replace the operators by any desired function calls.
Re: Overloading relational operators separately; thoughts?
On Wednesday, 28 September 2016 at 04:02:59 UTC, Walter Bright wrote: The limitations are deliberate based on the idea that comparison operators need to be consistent and predictable, and should have a close relationship to the mathematical meaning of the operators. In Mathematics the comparison operators are also commonly used for semi orders, which cannot be implemented by opCmp, because opCmp has no way to indicate that two values are incomparable. Interestingly the floating point types are semi ordered (due to NaNs), and for those D has some (non-overridable and deprecated) operators like !<, which would be easily extendable to any semi order, whereas the suggested replacement (i.e. test for NaNs manually) works only on floats.
Re: integral to floating point conversion
On Saturday, 2 July 2016 at 20:30:03 UTC, Walter Bright wrote: On 7/2/2016 1:17 PM, Andrei Alexandrescu wrote: So what's the fastest way to figure that an integral is convertible to a floating point value precisely (i.e. no other integral converts to the same floating point value)? Thanks! -- Andrei Test that its absolute value is <= the largest unsigned value represented by the float's mantissa bits. That has to be '<' given the condition that no other integer converts to the same value. Although 2^n can be represented exactly, 2^n+1 would be converted to the same float value.
Re: About GC: The Future of Rust : GC integration
On Wednesday, 8 June 2016 at 03:19:18 UTC, Jack Stouffer wrote: On Wednesday, 8 June 2016 at 03:10:32 UTC, Eugene Wissner wrote: In D some very important things like exceptions depend on GC. This is a common misconception. Exceptions do not have to use the GC, they just often are. All you have to do is malloc an exception and then throw it, and then remember to free it after you catch it up the call stack. The Phobos developers made the decision to use the GC in order to be @safe rather than fast. Exceptions and memory allocation are a pain to use anyway. When you call a function that calls a function that calls a function, and you get an Exception, how do you know how to properly deallocate it ? And it's not just the Exception Object itself, usually you also have to allocate at least a string for the Exception message.
Re: Free the DMD backend
On Wednesday, 1 June 2016 at 01:26:53 UTC, Eugene Wissner wrote: On Tuesday, 31 May 2016 at 20:12:33 UTC, Russel Winder wrote: On Tue, 2016-05-31 at 10:09 +, Atila Neves via Digitalmars-d wrote: […] No, no, no, no. We had LDC be the default already on Arch Linux for a while and it was a royal pain. I want to choose to use LDC when and if I need performance. Otherwise, I want my projects to compile as fast as possible and be able to use all the shiny new features. So write a new backend for DMD the licence of which allows DMD to be in Debian and Fedora. LDC shouldn't be the default compiler to be included in Debian or Fedora. The reference compiler and the default D compiler in a particular distribution are two independent things. Exactly. But since we can't legally distribute DMD in e.g. Debian, and DMD is the reference compiler, we will build software in Debian with a compiler that upstream might not have tested. Additionally, new people usually try out a language with the default compiler found in their Linux distribution, and there is a chance that the reference compiler and the default free compiler differ, which is just additional pain and plain weird in the Linux world. E.g. think of Python. Everyone uses and tests with CPython, although there are other interpreters available. If CPython were non-free, distros would need to compile with a free implementation, e.g. PyPy, which is potentially not feature complete, leading to a split in the Python ecosystem between what the reference compiler (CPython) does and what people actually use in Linux distributions (PyPy). Those implementations might use different language versions, or have a different standard library or runtime, making the issue worse. Fortunately, CPython is completely free, so we don't really have that issue ;-)
Re: Free the DMD backend
On Sunday, 29 May 2016 at 10:56:57 UTC, Russel Winder wrote: On Sun, 2016-05-29 at 04:08 +, Joakim via Digitalmars-d wrote: […] It would be nice if that happened, but Walter has said Symantec isn't interested. Aren't ldc and GDC enough? This is why LDC should be seen in the D community as the main production toolchain, and Dub should default to LDC for compilation. This is something which has been asked on my blog[1], and I do agree that having a completely free-as-in-freedom reference compiler would be an awesome win for the D ecosystem, and would pretty much kill most of the issues we have at distros when packaging D stuff. D is very unusual with its half-proprietary reference compiler. LDC seems to be a pretty good fit for replacing the backend. Shifting to LDC as reference compiler would basically mean slowly giving up DMD though, because other than it being the most thoroughly tested compiler, there wouldn't be a compelling reason to still use it once focus has shifted to LDC / GDC. In any case, this is definitely something for Walter and Andrei to decide, and I do have a feeling that this question might have been raised already in the past... [1]: http://blog.tenstral.net/2016/05/adventures-in-d-programming.html#comment-265879
Re: Always false float comparisons
On Wednesday, 18 May 2016 at 14:29:42 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 18 May 2016 at 12:27:38 UTC, Ola Fosheim Grøstad wrote: And yes, half-precision is only 10 bits. Actually, it turns out that the mantissa is 11 bits. So it clearly plays louder than other floats. ;-) The mantissa is 10 bits, but it has 11 bit precision, just as the float type has a 23 bit mantissa and 24 bit precision. AFAIK the only float format that stores the always-one-bit of the mantissa is the x87 80 bit format.
Re: Always false float comparisons
If you try to make compile-time FP math behave exactly like run-time FP math, you'd not only have to use the same precision in the compiler, but also the same rounding mode, denormal handling etc., which can be changed at run time (http://dlang.org/phobos/core_stdc_fenv.html), so the exact same piece of code can yield different results based on the run-time FP-context.
Re: Checking if an Integer is an Exact Binary Power
On Tuesday, 26 April 2016 at 08:12:02 UTC, Dominikus Dittes Scherkl wrote: On Monday, 25 April 2016 at 22:42:38 UTC, deadalnix wrote: x & -x is the smallest power of 2 that divides x. Basically, if x = 1000, x & -x = 1000. This is easy to prove considering -x = ~x + 1. Now, x >> 1 will be of the form 0100. If one of these digits is one, then (x >> 1) >= (x & -x). If they are all zeros, (x >> 1) < (x & -x), which is the case where you have a power of 2. 0 is a special case, and it can be checked that the function returns false for this specific input. This looks like it is correct. Yes. Except for the case 0x8000_0000 (= int.min), because this is negative, so NOT smaller than 0x4000_0000 (= int.min >>> 1), which is considered positive. So the algorithm doesn't work for signed integers (without an extra cast). Well, it depends a bit on how you define "is a power of two": If you define "x is a power of 2" as "there is a natural number n such that x == 2*2*...*2 (n times)", with * defined as multiplication in the ring of integers modulo 2^32 (i.e. int or uint), then int.min and 0 are powers of two because the multiplication overflows. If you define "x is a power of 2" as above but based on multiplication in the ring of integers, then int.min (== -2^31) and 0 are not powers of two.
Re: Distributor's wishlist and questions for D
On Thursday, 21 April 2016 at 09:07:57 UTC, Johan Engelen wrote: On Thursday, 21 April 2016 at 01:01:01 UTC, Matthias Klumpp wrote: The question here is also, which compiler should be the default (which IMHO would be the most complete, most bug-free actively maintained one ^^). Is performance of the generated code a criterion? Or the number of supported architectures? The number of supported architectures is at least a criterion for Debian (speed is also important, obviously ^^) - for arch support, one could set a different compiler on different architectures though...
Re: Distributor's wishlist and questions for D
On Thursday, 21 April 2016 at 11:58:23 UTC, Kagamin wrote: On Thursday, 21 April 2016 at 01:01:01 UTC, Matthias Klumpp wrote: [...] Many D users are enthusiasts and push the compiler to its limits; they are usually stuck with DMD (even DMD HEAD sometimes) as it provides the latest fixes. It depends on the coding style: AppStream generator is an example of good old plain business logic one is unlikely to need a recent frontend for, but for people writing black-magic code, a couple of versions of lag of the free compilers behind DMD is usually a blocker - and whoever uses the free compilers already considers them complete enough for their tasks. Currently LDC looks like the most actively maintained one. Asgen is super-boring code ;-) Mainly because the task it performs can be represented without using much black magic. But still, in order to make it work, I needed to embed a copy of std.concurrency.Generator in my code, because GDC's Phobos didn't contain that thing yet (and it's *so* useful!). Basically, the huge differences in the standard library versions are what annoyed me the most in D, since they also mean you can't trust the documentation much, depending on the compiler you use. For someone new to D, this is annoying.
LDC also fails to compile asgen: ``` /usr/include/d/std/parallelism.d-mixin-3811(3837): Error: template core.atomic.atomicOp cannot deduce function from argument types !("+=")(shared(ulong), int), candidates are: /usr/include/d/core/atomic.d(178): core.atomic.atomicOp(string op, T, V1)(ref shared T val, V1 mod) if (__traits(compiles, mixin("val" ~ op ~ "mod"))) /usr/include/d/std/parallelism.d-mixin-3811(3837): Error: template core.atomic.atomicOp cannot deduce function from argument types !("+=")(shared(ulong), int), candidates are: /usr/include/d/core/atomic.d(178): core.atomic.atomicOp(string op, T, V1)(ref shared T val, V1 mod) if (__traits(compiles, mixin("val" ~ op ~ "mod"))) /usr/include/d/std/parallelism.d-mixin-3823(3849): Error: template core.atomic.atomicOp cannot deduce function from argument types !("+=")(shared(ulong), int), candidates are: /usr/include/d/core/atomic.d(178): core.atomic.atomicOp(string op, T, V1)(ref shared T val, V1 mod) if (__traits(compiles, mixin("val" ~ op ~ "mod"))) /usr/include/d/std/parallelism.d(3344): Error: template instance std.parallelism.ParallelForeach!(Package[]) error instantiating source/engine.d(90):instantiated from here: parallel!(Package[]) ``` while GDC compiles the code flawlessly (LDC previously even crashed, but that was with the beta version). I will investigate why this happens now, but it's basically these small things which make working with D less fun, at least when you want to use a system without proprietary components.
Re: Distributor's wishlist and questions for D
Hi, and thanks for your detailed explanations! On Thursday, 21 April 2016 at 11:49:13 UTC, Johannes Pfau wrote: On Thursday, 21 April 2016 at 01:01:01 UTC, Matthias Klumpp wrote: [...] You currently can't install druntime or phobos headers in this directory, as each compiler will have slightly modified versions and then you end up with /usr/include/d/(dmd|gdc|ldc). That doesn't seem to be the case for LDC on Debian... It installs Phobos into /usr/include/d/std, which makes GDC go crazy as soon as LDC is installed too. I suspect this is a packaging bug; if so, I'll report a bug against LDC. But all other library headers should be shareable between compilers and can be placed in /usr/include/d. (This might even work for phobos, I don't think we have many compiler-specific changes there. But then you need to use the same frontend version for all compilers.) This is probably a very naive question, but: Isn't the D specification finished? If the language is done, why would there be a dependence of Phobos on a specific compiler? Or is this because new code in Phobos might expose bugs in the compiler itself, which causes these incompatibilities? You can only install headers for one library version with this approach! A versioned approach is nicer (/usr/include/d/libfoo/1.0.0), but requires explicit compiler support, and it's unlikely this will happen (or explicit dub support, and you compile everything through dub). Would be nice, but since we - most of the time - only allow one version of a specific software package to be present in distributions, it isn't a huge issue. ## dub: Where should dub modules be installed? What does the FHS recommend in this case? If you only keep headers/sources you could install into /usr/include. By the FHS, /usr/include/d should be okay for headers (well, the spec explicitly says C programming language headers, but well :P). Otherwise you probably need /var/cache or /var/lib or something like that.
/var/cache and /var/lib are forbidden for distributors, since they contain state information which shouldn't be touched (as always, there are exceptions...). If the dub packages contain architecture-independent stuff only, they need to go to /usr/share/dlang or /usr/include/d, and the shared or static library must go to /usr/lib//. If you split packages you could use the standard /usr/lib* folders, but then you need to keep versioned subdirectories to support installing multiple versions. Multiple versions would be a rare event, since no distro package management system allows that (excluding Nix here, which is kind of special). Installing arbitrary arch-specific content into a subdirectory in /usr/lib is fine too, but I doubt that will be necessary... ## dub: How to install modules? Ideally, dub would also have a "dub install" command, to install the binary, "headers (.di)"/sources and data into standard directories on Linux. (reported as https://github.com/dlang/dub/issues/811 ) See above. The main question is: Do you want to have a C-style install of one version of a library and have gdc find the includes and library when running gdc main.a -libfoo For plain gdc/ldc, I'd say: yes or do you want dub-style support for multiple library versions, which means you need to compile everything through dub. For stuff using dub, I'd say yes to that too ;-) I'd love to have some extended compiler support (so you could simply do gdc -use=libfoo:1.0.0 and this would pick up the correct headers and linker flags). But as some DMD maintainers are opposed to this idea, it won't happen. You'll probably always need dub for a 'nice' user interface. ++ Aside from these things, there are also some other things which would be very useful: ## Shared library support in all compilers This is mainly a GDC issue. DMD and LDC support shared libs, although I don't think these libraries are ABI compatible. Sounds more and more like LDC is - at the moment - the better choice over GDC...
Distributor's wishlist and questions for D
Hello! Me bringing dub to Debian (and subsequently Ubuntu) has sparked quite some interest in getting more D applications shipped in Linux distributions. Since I think D is a great language, I would welcome that - in order to get more D code into distributions though, it would be awesome to sort out a few issues which currently (seem to) exist. So I created this list, first to raise a bit of awareness of the issues that I see, and second - because I am very new to D - to allow others to point out solutions to those problems, or explain a different view on why some things are (not) done in a certain way (yet). While I am thinking from a Debian point of view, quite some stuff is relevant for other distros as well. So, let's get started: ## How complete are the free compilers? This is an important question, because we would need to know whether we can expect D code to be compiled by any compiler, or whether there are tradeoffs that must be made. This question is asking mainly how complete LDC and GDC are compared to each other, but also whether both are implementing the D2 specification completely. The question here is also, which compiler should be the default (which IMHO would be the most complete, most bug-free actively maintained one ^^). ## Why is every D compiler shipping its own version of Phobos? If one assumes that the D2 language specification is stable, and D compilers implement it completely, the question is why every compiler is shipping its own copy of Phobos. This is a major pain, since e.g. GDC's Phobos is behind what is documented on dlang.org, but also because the compilers sometimes accidentally use the "wrong" Phobos version (e.g. GDC trying to use the one of LDC), leading to breakage. Also, distributors hate code duplication, so deduplicating this would be awesome (druntime being compiler-specific makes sense to me, Phobos not so much...). It's not an essential thing, but would be quite nice... ## Where should D source code / D interfaces be put?
If I install a D library or source-only module as a distribution package, where should the sources be put? So far, I have seen: * /usr/include/d * /usr/include/dlang/ * /usr/include/d/(dmd|gdc|ldc) * /usr/share/dlang Having one canonical path would be awesome ;-) ## dub: Where should dub modules be installed? DUB currently only searches online for its packages, but if we install them as a distributor, we want dub to also look locally in some path where the packages can be found, and - if possible - use a prebuilt shared/static library. (reported as https://github.com/dlang/dub/issues/811 ) ## dub: How to install modules? Ideally, dub would also have a "dub install" command, to install the binary, "headers (.di)"/sources and data into standard directories on Linux. (reported as https://github.com/dlang/dub/issues/811 ) ++ Aside from these things, there are also some other things which would be very useful: ## Shared library support in all compilers In distributions, we hate duplicating binary code copies. At the moment, D links everything statically, which means that when a bug is discovered in a library used by many other tools (worst case: the standard library), we will need to recompile all depending software. This is really annoying for the security team, while not being as bad as having actual duplicate source-code copies which need to be patched individually. ## Stable ABI among compilers Simple: Compile your library with GDC, have it used by code compiled with LDC. Otherwise, shared libraries would be of very limited use. Cheers, Matthias
Re: [Request] A way to extract all instance of X from a range
On Monday, 21 March 2016 at 11:50:06 UTC, Timothee Cour wrote: On Mon, Mar 21, 2016 at 4:34 AM, Nick Treleaven via Digitalmars-d <digitalmars-d@puremagic.com> wrote: On 14/03/2016 11:32, thedeemon wrote: filter_map : ('a -> 'b option) -> 'a t -> 'b t "filter_map f e returns an enumeration over all elements x such that f y returns Some x, where y is an element of e." It is really convenient and comes in handy in many situations. However it requires some common Option/Maybe type that different libraries could use. There is a pull for Option: https://github.com/D-Programming-Language/phobos/pull/3915 We could have: // fun takes r.front and produces an Option of that type auto mapFilter(alias fun, R)(R r); // turn a possibly null value into an Option Option!T nullFilter(T)(T v) if (isNullable!T); auto src = [new Object(), new T(), null]; auto res = mapFilter!(e => nullFilter(cast(T)e)); assert(res.equal([src[1]])); see my proposal [+implementation] for emit http://forum.dlang.org/post/mailman.538.1458560190.26339.digitalmar...@puremagic.com emit is more powerful, and generalizes map, filter, joiner auto res = src.mapFilter!(e=>nullFilter(cast(T)e)); with emit: auto res = src.emit!((put,e){if(cast(T)e) put(e);}); Why not go for the well-established monadic bind function? A rangified bind would take a range of inputs and a lambda returning a range of results for one input, and return a range of all results. It is logically just a combination of map and concat (which turns a range of ranges into a combined range, but I think that one is missing in the std lib too).
Re: Compile-Time RNG
On Thursday, 21 January 2016 at 07:43:13 UTC, tsbockman wrote: On Thursday, 21 January 2016 at 01:49:27 UTC, Timon Gehr wrote: It only works because compile-time introspection is ill-defined though. I don't expect it to keep working indefinitely. That aspect can easily be replaced by __LINE__ and __FILE__. A supported way to generate new names at compile time would be nice.
Re: std.math: FloatingPointControl option to round to nearest + tie away from zero
On Friday, 11 December 2015 at 05:25:03 UTC, Shriramana Sharma wrote: https://en.wikipedia.org/wiki/IEEE_floating_point#Roundings_to_nearest says that IEEE 754 provides two options for rounding to nearest: ties to even and ties away from zero. However, under https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L4539 we have only one roundToNearest which, I presume, ties to even. Is there a difficulty in providing the option for tieing away from zero? Thanks. Those are basically the rounding modes supported by the FPUs in hardware. Any other rounding modes would have to be emulated in software, which would require the compiler to generate completely different code instead of just changing a few bits in an FPU control register.
Re: Signed integer overflow undefined behavior or not?
On Friday, 13 November 2015 at 09:33:51 UTC, John Colvin wrote: unsigned: f(v) = v mod 2^n - 1 signed: f(v) = ((v + 2^(n-1)) mod (2^n - 1)) - 2^(n-1) I guess you meant mod 2^n in both cases... If you look at how Mathematics deals with this issue, there is simply no signed or unsigned arithmetic modulo n, because they are exactly the same. There are only separate types in programming languages because the comparison operators are defined differently on them. Mathematicians don't define comparison on modular rings, because it is not possible to do so in a way that is consistent with the usual rules anyway (e.g. x+1 > x is always false for some x).
Re: Playing SIMD
On Sunday, 25 October 2015 at 19:37:32 UTC, Iakh wrote: Is it optimal and how do you implement this stuff? I think it's better to use PMOVMSKB to avoid storing the PCMPEQB result in memory, and you need only one BSF instruction.
Re: Casting double to ulong weirdness
On Tuesday, 25 August 2015 at 15:19:41 UTC, Márcio Martins wrote: If you compile it with *GDC* it works fine. If you compile a port with clang, gcc or msvc, it works right as well. I suspect it will also work fine with LDC. The same program "fails" in gcc too, if you use x87 math. Usually C compilers allow excess precision for intermediate results, because the extra precision seldom hurts and changing precision on x87 is very expensive (depends on the CPU, but it is more expensive than the trigonometric functions on some models).
Re: Casting double to ulong weirdness
On Monday, 24 August 2015 at 16:52:54 UTC, Márcio Martins wrote: I'm posting this here for visibility. This was silently corrupting our data, and might be doing the same for others as well. import std.stdio; void main() { double x = 1.2; writeln(cast(ulong)(x * 10.0)); double y = 1.2 * 10.0; writeln(cast(ulong)y); } Output: 11 12 Internally the first case calculates x * 10.0 in real precision and casts it to ulong in truncating mode directly. As 1.2 is not representable, x is really 1.199956 and the result is trunc(11.99956) = 11. In the second case x * 10.0 is calculated in real precision, but first converted to double in round-to-nearest mode and then the result is truncated.
Re: std.data.json formal review
On Friday, 14 August 2015 at 09:20:14 UTC, Ola Fosheim Grøstad wrote: On Friday, 14 August 2015 at 08:03:34 UTC, Walter Bright wrote: 1. 'real' has enough precision to hold 64 bit integers. Except for the lowest negative value… (it has only 63 bits + floating point sign bit) Actually, the x87 format has 64 mantissa bits, although bit 63 is always '1' for normalized numbers.
Re: Wait, what? What is AliasSeq?
On Friday, 17 July 2015 at 09:13:13 UTC, Daniel N wrote: On Friday, 17 July 2015 at 08:57:26 UTC, Marc Schütz wrote: On Thursday, 16 July 2015 at 15:50:15 UTC, Andrei Alexandrescu wrote: On 7/15/15 8:49 PM, Mike wrote: 1. "AliasSeq" is no good as evident from the first post that started this thread I am egging my face for starting this. Can we please return to AliasSeq? -- Andrei What about the Pack name? There was considerable support for it, and at least in this thread I haven't seen anyone opposing it. (And from my POV, *Seq is really the worst of all choices, as it has connotations in everyday language and other uses in mathematics that don't match TypeTuples at all, so I'm strongly opposed to it.) FWIW Pack > Seq Sack (a compromise between Seq and Pack)
Re: GDC adds intrinsic support for core.checkedint
On Friday, 3 July 2015 at 08:09:11 UTC, Robert burner Schadek wrote: On Friday, 3 July 2015 at 02:17:59 UTC, Timon Gehr wrote: On 07/03/2015 04:17 AM, Timon Gehr wrote: Bitwise are no math operations, these are CS operations What does that even mean? That there are no bitwise operations in math. You imply math == arithmetic, which is false. If you take a look at e.g. algebraic coding theory, it basically uses bitwise operations to model polynomial arithmetic. If you want a mathematically correct representation of the integers modulo 2^n (which is what most closely resembles an int in math), you'll also have to disallow the less-than and greater-than operators, as they cannot be defined sensibly on a cyclic group.
Re: Phobos addition formal review: std.experimental.allocator
Can I use these allocators in @nogc code too?
Re: wrong rounding
Interestingly, dmd computes a/b with SSE instructions if the result is cast to int, uint or long, but uses x87 instructions for the division if the result is cast to ulong.
Re: 0 is not a power of 2
I think you can make the over/underflow at zero work in your favor: bool isPowerOf2(uint x) { return (x & -x) > (x - 1); }
Re: 0 is not a power of 2
On Tuesday, 19 May 2015 at 05:16:48 UTC, Andrei Alexandrescu wrote: [...], but it wrongly returns true for x == 0. When we're talking about modulo 2^n arithmetic, 0 is in fact a power of two. Proof: 2^n mod 2^n == 0.
Re: DIP74 updated with new protocol for function calls
On Sunday, 1 March 2015 at 07:04:09 UTC, Zach the Mystic wrote: On Saturday, 28 February 2015 at 21:12:54 UTC, Andrei Alexandrescu wrote: Defines a significantly better function call protocol: http://wiki.dlang.org/DIP74 Andrei This is obviously much better, Andrei. I think an alternative solution (I know -- another idea -- against my own first idea!) is to keep track of this from the caller's side. The compiler, in this case, when copying a ref-counted type (or derivative) into a parameter, would actually check to see if it's splitting the variable in two. Look at this: class RcType {...} void fun(RcType1 c, RcType1 d); auto x = new RcType; fun(x, x); If the compiler were smart, it would realize that by splitting parameters this way, it's actually adding an additional reference to x. The function should get one x for free, and then force an opAdd/opRelease for every additional x (or x derivative) it detects in the same call. This might be even better than the improved current proposal. The real key is realizing that duplicating an lvalue into the same function call is subtly adding a new reference to it. Eh?? Note that you can get the same issue without duplicate parameters, if you pass an alias to a global variable:

```d
static A a;

void fun(A x)
{
    a = null; // releases the object x still references
    x.func();
}

void main()
{
    a = new A();
    fun(a);
}
```
Re: DIP74: Reference Counted Class Objects
When a function makes/destroys multiple references to an object, it should always be safe to coalesce all AddRefs into the first AddRef call and all Releases into the last Release call. This could be a small performance win, but opAddRef/opRelease would need the count as an argument or maybe as a template parameter.
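A hedged sketch of what that could look like; an opAddRef/opRelease taking a count is hypothetical here, not part of DIP74 as published, and all names are illustrative:

```d
// Hypothetical protocol: the reference-count delta is passed as an argument,
// so the compiler can coalesce n AddRefs into a single call.
class Widget
{
    int refs = 1;
    void opAddRef(int n)  { refs += n; }
    void opRelease(int n) { refs -= n; if (refs == 0) { /* free resources */ } }
}

void fun(Widget a, Widget b, Widget c) { /* three references to one object */ }

void demo(Widget w)
{
    w.opAddRef(3);   // coalesced into the first AddRef site
    fun(w, w, w);
    w.opRelease(3);  // coalesced into the last Release site
}
```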
Re: Mac Apps That Use Garbage Collection Must Move to ARC
On Monday, 23 February 2015 at 12:30:55 UTC, Russel Winder wrote: On Mon, 2015-02-23 at 19:50 +1000, Manu via Digitalmars-d wrote: O[…] This is going to sound really stupid... but do people actually use exceptions regularly? I've never used one. When I encounter code that does, I just find it really annoying to debug. I've never 'gotten' exceptions. I'm not sure why error codes are insufficient, other than the obvious fact that they hog the one sacred return value. D is just a whisker short of practical multiple return values. If we cracked that, we could use alternative (superior?) error-state return mechanisms. I'd be really into that. […] Return codes for value-returning functions only work if the function returns a pair: the value and the error code. A single return slot generally cannot serve as both the result and the error indicator. C got this fairly wrong, Go gets it fairly right. You wouldn't need new syntax (though I think multiple returns would be a nice addition); I think you can compile try/catch exception syntax into error codes internally.
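In today's D, the pair-return style can be approximated with std.typecons.Tuple; this is a sketch of the convention, not a proposal for new syntax:

```d
import std.typecons : Tuple, tuple;

// Go-style (value, ok) pair: the error channel is separate from the result.
Tuple!(int, "value", bool, "ok") safeDiv(int a, int b)
{
    if (b == 0)
        return tuple!("value", "ok")(0, false);
    return tuple!("value", "ok")(a / b, true);
}

void main()
{
    assert(!safeDiv(10, 0).ok);        // caller must inspect the error channel
    assert(safeDiv(10, 2).value == 5);
}
```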
Re: Plan for Exceptions and @nogc?
On Wednesday, 18 February 2015 at 08:13:35 UTC, Ola Fosheim Grøstad wrote: On Wednesday, 18 February 2015 at 00:54:37 UTC, Chris Williams wrote: I didn't mean it as a solution. As said, I was just looking for an intro to the topic, so that I (and others) could meaningfully contribute or at least understand the options. I'll look up libunwind and, if that has enough info for me to grok it, create a wiki page on the subject. It is a horrible solution developed for the Itanium VLIW architecture, which is very sensitive to branching. IIRC it basically works by looking at the return address on the stack, then picking up stack frame information in a global table to unwind. It is language-agnostic, and the language provides a "personality function" to unwind correctly in a language-dependent manner... AFAIK, C++ exceptions are copied from the stack to a special memory region when unwinding to prevent the memory issues D suffers from. I agree that a fast unwind with stack pointer reset or multiple return paths would be much better, but you need to rewrite the backend to support it. That's the main issue... the "fast path" argument is just a sorry excuse that literally means that exceptions are avoided for common failures in C++. As a result you get APIs that are nonuniform. Windows SEH maintains a per-thread linked list of exception handlers, but the C++ runtime seems to install only one handler at the start of every function and resorts to lookup tables if there are multiple try{}s in the function. If you want to avoid lookup tables, you can of course add/remove catchers dynamically whenever you enter/leave a try block; that would add a small cost to every try, but avoids the (larger) table lookup cost on the catch.
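The dynamic-registration alternative mentioned at the end can be sketched roughly as follows; setjmp/longjmp stand in for the real register/stack save and restore, and everything here is illustrative rather than how any D runtime actually works:

```d
// Sketch of SEH-style dynamic handler registration: each try pushes a frame
// onto a per-thread list; throw pops the list and jumps to the innermost catcher.
import core.stdc.setjmp : jmp_buf, setjmp, longjmp;

struct CatchFrame
{
    jmp_buf env;       // saved execution context of the try site
    CatchFrame* prev;  // next-outer handler
}

CatchFrame* top;       // would be thread-local in a real runtime

void raise()
{
    auto f = top;
    top = f.prev;       // pop before transferring control
    longjmp(f.env, 1);  // resume at the matching setjmp with value 1
}
```

The trade-off is exactly the one described above: a constant push/pop cost on every try block, in exchange for a throw path that never consults static unwind tables.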
Re: Plan for Exceptions and @nogc?
On Tuesday, 17 February 2015 at 20:48:07 UTC, Jonathan Marler wrote: That would work if you didn't have to unwind the stack, but unfortunately you do. The catch block exists in the context of the function it is written in. That means it assumes the stack pointer and stack variables are all in the context of its defining function. If you executed the catch code when the stack wasn't unwound, then it wouldn't know where any of the variables were. Does that make sense? Think about it for a minute. Your proposal suggests that the catch code can be executed no matter how many child functions have been added to the stack. This is impossible, since the catch code no longer knows where all of its stack variables are. Normally it uses an offset to the stack pointer, but now that has been changed. That's why you have to unwind the stack. So the catcher would have to behave like a delegate.
Re: Plan for Exceptions and @nogc?
On Tuesday, 17 February 2015 at 18:30:24 UTC, Jonathan Marler wrote: I thought of the same thing but then realized that it would be impossible to ensure that the catch block wouldn't stomp on that memory. The catcher wouldn't stomp any more on the thrower's memory than a function stomps on the memory of its caller. All the data of the thrower is safe, because it is above the stack pointer. The unwinding hasn't been done at that point.
Re: Plan for Exceptions and @nogc?
On Tuesday, 17 February 2015 at 17:38:20 UTC, Jonathan Marler wrote: The reason you can't keep the "thrower's" stack memory around for the exception handler is because the exception handler may need that memory. Once the exception is thrown the stack is unwound to the function that has the exception handler so all the memory gets released. In most cases the exception handler probably won't mess up the memory the exception is using, but that can't be guaranteed. The problem I see, is that if I program a @nogc function for performance reasons, I'll likely have some data in scoped memory that is useful for handling the exception. If the stack is unwound before the exception handler, the thrower has to copy it to non-scoped memory and the catcher has to deal with it whether it needs the data or not. If the unwinding is done after the exception handler is left, the thrower can safely reference the data directly on the stack and the catcher can ignore any data it doesn't need. (It may copy the data to safety if it is needed later, but the catcher knows what it needs, whereas the thrower has to always assume the worst case.)
Re: Plan for Exceptions and @nogc?
On Tuesday, 17 February 2015 at 12:41:51 UTC, Marc Schütz wrote: On Tuesday, 17 February 2015 at 09:19:55 UTC, Tobias Pankrath wrote: On Tuesday, 17 February 2015 at 07:24:43 UTC, Jonathan Marler wrote: Relaxing the definition of @nogc to exclude exceptions could be an acceptable compromise. However, the nature of an exception is that it is allocated when it is created, and deallocated after it is caught. This model fits nicely with scope semantics. The code catching the exception would catch a "scoped" reference, which means it would be responsible for cleaning up the exception. It would still be allocated on the heap, but it wouldn't be GC memory. This is how I think exceptions should have been implemented in the first place, but back then the GC wasn't a big deal and scope didn't exist yet. This actually puts scope on the head. It's unique / ownership you're looking for (if at all). Right. But `scope` still has a place in it. Either it would be necessary to allow throwing and catching the unique/owned types (instead of `Throwable`), which would be quite a change to the language, or the runtime would manage the exceptions (freeing them as soon as they are no longer needed). In that case, the exceptions must not leave the `catch` blocks, which is what `scope` guarantees. Maybe it is possible to have a separate ScopedThrowable exception class. Those exceptions would be allocated on the stack and would be allowed to carry references to local/scoped data, but they live only for the duration of the corresponding exception handler. The compiler should check that the exception and its payload don't escape the catch block, and of course the exception handler has to run before the stack unwinding is done. The whole point is of course that ScopedThrowables could be thrown from @nogc functions.
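A hypothetical sketch of that idea; ScopedThrowable and a stack-allocated throw do not exist in D, so this only illustrates the intended lifetimes:

```d
// Hypothetical: a scope-limited exception carrying a slice of the thrower's
// stack data. The handler would run before unwinding, so `line` stays valid
// inside the catch block, and `scope` would keep the exception from escaping.
class ScopedParseError /* : hypothetical ScopedThrowable */
{
    const(char)[] line;  // may point into the thrower's stack frame
    this(const(char)[] l) @nogc { line = l; }
}

// Imagined usage (not valid D today):
//   try { parse(input); }
//   catch (scope ScopedParseError e) { report(e.line); }  // e must not escape
```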
Re: Unreachable statement, do or do not, there is no try
On Tuesday, 10 February 2015 at 20:37:16 UTC, Walter Bright wrote: On 2/9/2015 11:07 PM, deadalnix wrote: On Tuesday, 10 February 2015 at 07:01:18 UTC, Jacob Carlborg wrote: DMD will complain about the second example if warnings are enabled. Ok I think that answer the question. The thing is, you can still catch an Object, so catching Exception won't quite catch them all. s/Object/Error/ ?
Re: forcing "@nogc" on class destructors
On Wednesday, 28 January 2015 at 09:51:09 UTC, Ola Fosheim Grøstad wrote: Some languages keep track of parent-child relationships, you can do it in the typing even. Nevertheless, children ought to be alive when the parent dies... If the language cannot provide this, then provide another mechanism such as "finalize", or just disallow GC-allocating destructor-based classes. Mish-mashing established programming concepts is Not a Good Idea. :) But wouldn't enforcing strict parent-child relationships make cyclic references illegal?