Chimpfella - new library to do benchmarking with ranges (even with templates!)
https://code.dlang.org/packages/chimpfella

Haven't finished documenting it yet. This uses enormous amounts of static this and that and templates, so expect vague error messages (I have tried to catch obvious errors early using static asserts, but they aren't magic). This will soon support Linux's perf_event, so you will be able to measure cache misses (and all the other thousands of PMCs Intel exposes), use the LBR MSRs, etc.

Quick code sample: if for some reason you wanted to measure how many CPUIDs it takes to make your CPU literally useless, you'd write this code (the ctfeRepeater helper function exists because dmd doesn't like map at compile time):

static string ctfeRepeater(int n)
{
    return "cpuid;".repeat(n).join();
}

enum cpuidRange = iota(1, 10).map!(ctfeRepeater).array;

@TemplateBenchmark!(0, cpuidRange)
@FunctionBenchmark!("Measure", iota(1, 10), (_) => [1, 2, 3, 4])(meas)
static int sum(string asmLine)(inout int[] input)
{
    // This is quite fun because ldc will sometimes get rid of the
    // entire function body and just loop over the asm's
    int tmp;
    foreach (i; input)
    {
        mixin("asm { ", asmLine, ";}");
    }
    return tmp;
}
Re: DLP - D Language Processing 0.3.0 - infer attributes
On Friday, 18 December 2020 at 20:07:25 UTC, Jacob Carlborg wrote: I would like to announce a new release of DLP, 0.3.0. [...] Looks very useful. Something strange I noticed, though: it seems to be doing the analysis in 32 bits? I had some static assertions go off because of this, which halted the program. I'm using a Mac with macOS 10.15, if that changes anything.
DLP - D Language Processing 0.3.0 - infer attributes
I would like to announce a new release of DLP, 0.3.0. For those not familiar with DLP, it's a tool collecting commands/tasks related to processing the D programming language. It uses the DMD frontend as a library to process D code.

The major new feature in this release is a new command that has been added: `infer-attributes`. This command will print the inferred attributes of all functions that are normally not inferred by the compiler. These are regular functions and methods. Templates, nested functions and lambdas are inferred by the compiler and will not be included by this command. By default, virtual methods are not inferred. Use the flag `--include-virtual-methods` to enable inferring of virtual methods.

The attribute inference is enabled only on the file that is currently being processed. This is to avoid the tool outputting attributes that would not be valid unless other files are modified. For example, say your code is calling a third-party function. The third-party function is inferred to be `pure`. Now your function is inferred to be `pure` as well. But if you only change your function to be `pure`, it will fail to compile, because the third-party function has not been updated.

The intention of this command is to help you add attributes to your code. You don't have to figure out exactly which attributes you can add to a function; just have the compiler tell you instead. It's recommended to run DLP in an iterative process: run it on a file, update the file with the new attributes, run it again, and repeat until the tool doesn't output any more inferred attributes.

The release is available here [1]. Pre-compiled binaries are available for macOS, FreeBSD and Linux 64-bit, and Windows 32-bit and 64-bit. For the full change log, see [1].

[1] https://github.com/jacob-carlborg/dlp/releases/tag/v0.3.0

--
/Jacob Carlborg
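To illustrate the kind of function the command targets, here is a minimal sketch (the file name, function, and reported attribute list are hypothetical; the exact output format of `infer-attributes` may differ):

static int sumSquares(const int[] xs)
{
    // A plain function: the compiler does not infer attributes here,
    // even though the body would qualify for several of them.
    int total;
    foreach (x; xs)
        total += x * x;
    return total;
}

Running DLP's `infer-attributes` on a file containing this function could report that `sumSquares` qualifies for attributes such as `pure`, `nothrow`, `@nogc` and `@safe`; you would then add them to the signature and re-run the tool, per the iterative process described above.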
Re: Truly algebraic Variant and Nullable
On Thursday, 17 December 2020 at 15:38:52 UTC, jmh530 wrote: On Thursday, 17 December 2020 at 15:12:12 UTC, 9il wrote: On Wednesday, 16 December 2020 at 16:14:08 UTC, jmh530 wrote: On Wednesday, 16 December 2020 at 15:58:21 UTC, 9il wrote: [...] What about making it into a sub-package, as in here [1]? [1] https://github.com/atilaneves/unit-threaded/tree/master/subpackages It takes 0.1 seconds to compile mir-core with LDC in release mode. It is almost all generic and quite fast to compile. We can, but dub doesn't always work well with subpackages. Is there any other reason besides compilation speed? You can put it on code.dlang.org as a subPackage and people can download it without downloading all of mir-core. See: https://code.dlang.org/packages/unit-threaded dub downloads the whole package even if just a subpackage is required. The size of mir-core is less than 0.5 MB, and 0.1 MB as a zip archive.