Re: xlsxd: An Excel xlsx writer
On Monday, 12 November 2018 at 10:38:28 UTC, Robert Schadek wrote: On Saturday, 10 November 2018 at 10:55:04 UTC, Dave wrote: Could you please elaborate a bit on your workflow for D with Vim? E.g. what do you use for debugging, refactoring, ...? I had a lot of functions looking like this:

void chart_axis_set_name(lxw_chart_axis* handle, const(char)* name)

which I had to transform into:

void setName(string name) { chart_axis_set_name(this.handle, toStringz(name)); }

For that I created a handful of (neo)vim macros that basically did the transformations for me. As for my neovim setup: I use Dutyl. DScanner generates ctags recursively when I press F7, which Ctrl-P then uses for jump marks. I use kdbg to debug; it's just a somewhat pretty frontend to gdb. That's pretty much it. Many thanks! I have followed your setup and it helps :-)
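The C-to-D wrapper transformation described above can be sketched as follows. This is a minimal self-contained illustration, assuming a stubbed-out `lxw_chart_axis` and a hypothetical `ChartAxis` wrapper struct (neither is the actual xlsxd code; the real binding calls into libxlsxwriter's C API):

```d
import std.string : toStringz, fromStringz;

// Stub stand-in for the C struct and function the generated bindings expose.
// In the real library these are extern(C) declarations; here the function
// has a D body so the example runs on its own.
struct lxw_chart_axis { string name; }

void chart_axis_set_name(lxw_chart_axis* handle, const(char)* name)
{
    handle.name = name.fromStringz.idup; // stub: real library stores it internally
}

// The hand-written wrapper the vim macros help produce: a camelCased method,
// the handle made implicit, and the D string converted with toStringz.
struct ChartAxis
{
    lxw_chart_axis* handle;

    void setName(string name)
    {
        chart_axis_set_name(this.handle, toStringz(name));
    }
}

void main()
{
    auto raw = new lxw_chart_axis;
    auto axis = ChartAxis(raw);
    axis.setName("Revenue");
    assert(raw.name == "Revenue");
}
```

The mechanical parts of this rewrite (drop the first parameter, camelCase the name, wrap string arguments in `toStringz`) are exactly the kind of repetitive edit that vim macros handle well.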
Re: D compilation is too slow and I am forking the compiler
On Thursday, 29 November 2018 at 09:18:35 UTC, Laeeth Isharc wrote: The innovator's dilemma, which is really an insight that dates back to Toynbee, and before that Ibn Khaldun, is not so obvious. I am not sure that you have understood it. I suggest reading the book if you are interested, but otherwise I unfortunately don't have much time at the moment to try to persuade you of what this phenomenon is like, and there's limited value in talking about talking rather than having a discussion based on a shared understanding of what this is about. No, indeed I've not read about it (yet; it looks like a tale of our lives). My market indeed has a distribution where the best customer brings 3:1 vs the average; it's not inf:1 like it would be for a programming language's "customers". "Impulse buy" is predominant too, which does not exist for technical decisions. I understand that a few big players will bring a lot more than hundreds of smaller ones. Especially if the smaller ones keep writing on the D newsgroup :) In my market too, I prefer it if D is kind of unpopular! I don't tell details to competitors other than "works for me". And they aren't much interested, which is satisfying. This secrecy has to be balanced against the fact that we have to hire, and since D is a critical piece of infrastructure I'd like it to simply receive more money; being a small player, I contribute what I can to the Foundation and direct others who came to Dplug to do so.
Re: LDC 1.13.0-beta2
On Thursday, 22 November 2018 at 16:54:55 UTC, Joakim wrote: On Thursday, 22 November 2018 at 16:36:22 UTC, H. S. Teoh wrote: On Thu, Nov 22, 2018 at 01:25:53PM +, Joakim via Digitalmars-d-announce wrote: On Wednesday, 21 November 2018 at 10:43:55 UTC, kinke wrote: > Glad to announce the second beta for LDC 1.13: > > * Based on D 2.083.0+ (yesterday's DMD stable). [...] I've added native builds for Android, including Android/x86_64 for the first time. Several tests for std.variant segfault, likely because of the 128-bit real causing x64 codegen issues, but almost everything else passes. [...] What's the status of cross-compiling to 64-bit ARM? On the wiki you wrote that it doesn't fully work yet. Does it work with this new release? It's been mostly working since 1.11. That note on the wiki links to this tracker issue that lists the few remaining holes, mostly just extending Phobos support from 80-bit precision out to full 128-bit quadruple precision in a few spots and finishing off the C/C++ compatibility: https://github.com/ldc-developers/ldc/issues/2153 Btw, if you ever want to check the current status of the AArch64 port, all you have to do is look at the logs for the latest run of the ldc AArch64 CI, which kinke set up and which runs for every ldc PR, on this dashboard: https://app.shippable.com/github/ldc-developers/ldc/dashboard Clicking on the last job on the master branch, expanding the build_ci output in the log, then doing the same for the stdlib tests, I see only five Phobos modules with failing tests. Three are mentioned in the tracker issue above, while std.complex has a single assert that trips because it's a few bits off at 113-bit precision, which is still much more accurate than the 64-bit precision (or less) it's normally run at on x86/x64.
Also, a single assert in std.algorithm.sorting trips for the same reason as a handful of tests in std.math: -real.nan at compile time is output as real.nan by ldc running natively on AArch64, though not when cross-compiling. std.internal.math.gammafunction works fine at 64-bit precision on AArch64, but only a couple of the 100 or so constant reals it uses are at full 113-bit precision, so several tests that only allow a couple of bits off from full real precision trip their asserts. Obviously that only matters if you need full 113-bit precision from that module. kinke recently disabled the tests for core.thread on the CI because they're super-flaky on linux/glibc/AArch64, while I haven't had that problem with Bionic/AArch64. You will see more tests failing if you cross-compile from x64, because of the mismatch between 64-bit precision for compile-time reals and 113-bit precision for runtime reals on AArch64. Also, you can see the 10-12 modules that assert in the dmd compiler testsuite earlier in that log, mostly because of missing core.stdc.stdarg.va_arg support to call C varargs on AArch64. That's about it: help is appreciated on tightening those last few screws.
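The "couple of bits off" failure pattern can be pictured with `std.math.feqrel`, which counts how many mantissa bits two reals agree on. This is a hypothetical illustration of the kind of tolerance check involved, not one of the actual Phobos tests (on a single machine the two values below agree exactly; the mismatch described above arises when the compile-time constant was folded at the x64 host's 64-bit precision but the runtime value is computed in AArch64's 113-bit quadruple precision):

```d
import std.math : feqrel;

void main()
{
    // A constant folded at compile time. When cross-compiling from x64,
    // compile-time reals carry only 64 bits of mantissa precision.
    enum real ct = 1.0L / 3.0L;

    // The same value computed at run time. On AArch64, real is a 113-bit
    // quadruple-precision type, so this result can be more precise than ct.
    real three = 3.0L;
    real rt = 1.0L / three;

    // Phobos tests of this kind allow only a couple of bits of error
    // relative to full real precision (real.mant_dig bits). A constant
    // carrying only 64 bits of precision fails such a check on a
    // 113-bit-precision target.
    assert(feqrel(ct, rt) >= real.mant_dig - 2);
}
```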
Re: D compilation is too slow and I am forking the compiler
On Wednesday, 28 November 2018 at 13:30:37 UTC, Guillaume Piolat wrote: On Wednesday, 28 November 2018 at 12:48:46 UTC, Laeeth Isharc wrote: Nassim Taleb raises the question of how you choose between two surgeons, both recommended. One looks the part and hangs his many certificates on his office wall. The other looks scruffy, with the appearance of a tradesman. Who do you pick? Taleb says pick the guy who doesn't look the part, because if he got there without signalling he must have something going for him. That's definitely the kind of surgeon one should choose - programmers that are not necessarily well groomed, etc. - but is it the kind of surgeon people will actually recommend? I'm doubtful. If X has the social signalling, then people will recommend X even without trying, because it's socially safe. If one doesn't have the signalling, I've found the hard way that even supporters will hesitate a bit before making recommendations, because of the social standing _cost_ it may have. But then, perhaps recommendations don't matter, because opinions don't matter much? I think they matter just to be heard in public places. And I think early adopters need a nudge; the influential need to be bothered by the less influential (influencers are not especially on the lookout for new options, as they are already influential). Above all, I think the niche of early adopters is smaller than the larger market for languages, and the early adopters are going elsewhere. The innovator's dilemma, which is really an insight that dates back to Toynbee, and before that Ibn Khaldun, is not so obvious. I am not sure that you have understood it. I suggest reading the book if you are interested, but otherwise I unfortunately don't have much time at the moment to try to persuade you of what this phenomenon is like, and there's limited value in talking about talking rather than having a discussion based on a shared understanding of what this is about.
Re: D compilation is too slow and I am forking the compiler
On Wednesday, 28 November 2018 at 13:05:34 UTC, Guillaume Piolat wrote: On Wednesday, 28 November 2018 at 12:48:46 UTC, Laeeth Isharc wrote: D isn't really marketed, and it's definitely not sold. That's an implicit strategy in itself. What I see in my (absurdly competitive) market is that the people who truly do no marketing tend to close shop, sometimes despite very competitive offerings. It colors my perception, of course, since it can be very tempting to appeal to a limited pool of discerning customers; but that would mean death. What is the ratio of expenditure of your best customer to an average customer? Not much. That's one main reason why the intuition you developed by organising your emotions around your business domain fits this domain less well. What is the ratio of expenditure of the biggest 'customer' of Python to the average 'customer'? Measured by resources lent to the community directly or indirectly, or by the wage bill of programmers at that firm working in Python, this ratio is enormous.
(Oh My) Gentool 0.0.2 released
(Oh My) Gentool - yet another C/C++ binding generator. Compared to the previous release there is a whole load of improvements in nearly all aspects, though it is still at an early stage. Much cleaner output with less garbage than the last release. If you used the previous version and it sucked, give it a second try - it should suck much less now! There are still lots of issues and quirks, and some execution paths have hard-coded features: for example, the integer-to-pointer (0 to null) replacement works in parameters but not in inlined code. Some template-heavy libs are not yet usable, such as the FBX SDK. The same goes for Bullet3: it uses too much highly C++-specific stuff and too many optimizations, and needs more work to become usable.

Quick stats (library / generated lines):
- Bullet3 - 45k + 1.8k mangled names (wow) - UNUSABLE
- FBX SDK 2017 - 32.5k - UNUSABLE
- (dear) ImGui - 2.7k
- PhysX 3.3.4 - 15k (not tested, but seems valid after fixes)
- libuv - 3k
- recast/detours - 2k
- ultralight v0.9 - 2.3k

(Note: most of this is untested, so these stats may just be counts of possibly broken garbage. As a rough rule, anything above 15k generated LOC is not yet usable.) Only ImGui has actually been tried; it works after a bunch of fixes, though templates weren't tested.

How to start: https://github.com/Superbelko/ohmygentool/wiki/QuickStart
Code: https://github.com/Superbelko/ohmygentool
Binaries: https://github.com/Superbelko/ohmygentool/releases/tag/v0.0.2
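The 0-to-null parameter rewrite mentioned above can be sketched like this. This is a hypothetical illustration of the idea, not actual gentool output (the `setUserData` function and its body are invented for the example):

```d
// C++ headers often use a literal 0 as a null-pointer default:
//
//     void setUserData(void* data = 0);
//
// D rejects 0 as a pointer default argument, so a binding generator has to
// rewrite such defaults to null. The generated declaration looks like:
void setUserData(void* data = null)
{
    // stub body so the example runs; a real binding would be extern(C++)
    // and link against the library
}

void main()
{
    setUserData();      // uses the rewritten null default
    int x;
    setUserData(&x);    // passing an explicit pointer still works
    // Per the post, this replacement is applied in parameter lists but not
    // yet inside inlined method bodies, so generated code like `ptr = 0;`
    // currently needs a manual fix to `ptr = null;`.
}
```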