Re: gdc or ldc for faster programs?
On Monday, 31 January 2022 at 08:54:16 UTC, Patrick Schluter wrote: -O3 often chooses longer code and unrolls more aggressively, inducing higher miss rates in the instruction caches. -O2 can beat -O3 in some cases when code size is important. That is generally true. My point is that GCC and Clang make different tradeoffs when told '-O2'; Clang is more aggressive than GCC at -O2. I don't know whether that still holds at -O3 (I expect probably not).
Re: gdc or ldc for faster programs?
On Tuesday, 25 January 2022 at 22:33:37 UTC, H. S. Teoh wrote: interesting because idivl is known to be one of the slower instructions, but gdc nevertheless considered it not worthwhile to replace it, whereas ldc seems obsessed with avoiding idivl at all costs. Interesting indeed. Two remarks: 1. The actual performance cost of div depends a lot on hardware. IIRC on my old Intel laptop it's something like 40-60 cycles; on my newer AMD chip it's more like 20; on my Mac it's ~10. GCC may be assuming newer hardware than LLVM. Could be worth trying -march=native -mtune=native. It could also depend on how many ports can do divs, i.e. how many of them you can have running at a time. 2. LLVM is more aggressive wrt certain optimizations than GCC by default, though I don't know how relevant that is at -O3.
Re: RFC to: my need for 'static switch' and CT 'static variables'
static if (...) {
} else static if (...) {
} else {
    static assert(0);
}
Re: Attributes (lexical)
On Thursday, 25 November 2021 at 08:06:27 UTC, rumbu wrote: Is that ok or is it a lexer bug? @ (12) does exactly what I would expect. I always assumed @nogc was a single token, but the spec says otherwise. I suppose that makes sense. #line is dicier, as it is not part of the grammar proper; however, the spec describes it as a 'special token sequence', and comments are not tokens, so I think the current behaviour is correct.
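If that two-token reading is right, whitespace between '@' and the identifier should also be accepted; a quick check of my own (not from the thread):

@ nogc   // lexes as '@' followed by the identifier 'nogc', same as @nogc
void f() { }

@nogc
void g() { }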
Re: Is DMD still not inlining "inline asm"?
On Thursday, 11 November 2021 at 13:22:15 UTC, Basile B. wrote: As for now, I know no compiler that can do that. GCC can do it. Somewhat notoriously, LTO can lead to bugs from underspecified asm constraints following cross-TU inlining.
Re: abs and minimum values
On Sunday, 31 October 2021 at 10:32:50 UTC, Imperatorn wrote: What I would like is for it to mirror math. Use bigints.
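For instance (my own sketch using std.bigint, not from the thread), the int.min case that overflows a fixed-width abs is unproblematic once the value is widened:

import std.bigint : BigInt;
import std.stdio : writeln;

void main()
{
    auto x = BigInt(int.min);
    auto a = x < 0 ? -x : x;   // no overflow: a BigInt simply grows
    writeln(a);                // 2147483648, which does not fit in an int
}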
Re: Does associative array change the location of values?
On Sunday, 31 October 2021 at 02:56:35 UTC, Ali Çehreli wrote: On 10/30/21 3:47 PM, Elronnd wrote: > If the GC were moving, it would also have to move the pointers you took to AA elements. I doubt D's GC can ever change pointer values because the values may be hiding inside e.g. ulong variables. And we would definitely not want the GC to change ulong values just because it thought they were familiar pointer values in disguise. :) Precise GC exists now.
Re: Does associative array change the location of values?
On Saturday, 30 October 2021 at 21:20:15 UTC, Stanislav Blinov wrote: On Saturday, 30 October 2021 at 20:19:58 UTC, Imperatorn wrote: https://dlang.org/spec/garbage.html#pointers_and_gc What test could be written to verify the behaviour? Assuming the GC was moving? If the GC were moving, it would also have to move the pointers you took to AA elements. You would never get stale pointers in any event.
Re: Why do we have Dmain?
On Friday, 22 October 2021 at 09:01:53 UTC, Kagamin wrote: Actually C runtime is many megabytes in size. A couple of samples:

$ wc -c /usr/lib/libc-2.33.so
2150424 /usr/lib/libc-2.33.so

% wc -c /lib/libc.so.7
1981952 /lib/libc.so.7

I would hardly call two megabytes 'many'.
Re: How to make a function that accepts optional struct but can accept struct literal too
On Friday, 15 October 2021 at 21:47:21 UTC, Paul Backus wrote: static global(alias value) = value; I fear there will be issues with reentrancy.
Re: How to test if a string is pointing into read-only memory?
On Tuesday, 12 October 2021 at 09:20:42 UTC, Elronnd wrote: problematic wrt threading Not to mention signals. Reentrancy's a bitch.
Re: How to test if a string is pointing into read-only memory?
On Tuesday, 12 October 2021 at 08:19:01 UTC, jfondren wrote: What's a reliable test that could be used in a toStringz that skips allocation when given a string in read-only memory? There is no good way.

- You could peek in /proc, but that's not portable.

- You could poke the data and catch the resulting fault; but that's: 1) horrible, 2) slow, 3) problematic wrt threading, 4) sensitive to user code mapping its own memory and then remapping as rw (or unmapping).

- You could make a global hash table into which are registered the addresses of all rodata; but that is difficult to get right across translation units, especially in the face of dynamic linking. This is probably the most feasible, but is really not worth the hassle.
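For what it's worth, a minimal sketch of the first (Linux-only, non-portable) option, scanning /proc/self/maps; the name and structure are my own illustration, not a vetted implementation:

// Report whether p lies inside a mapping that is readable but not writable.
bool looksReadOnly(const(void)* p)
{
    import std.stdio : File;
    import std.conv : to;
    import std.array : split;

    auto addr = cast(size_t) p;
    foreach (line; File("/proc/self/maps").byLineCopy)
    {
        auto fields = line.split();              // e.g. "7f3a...-7f3b... r--p ..."
        if (fields.length < 2) continue;
        auto bounds = fields[0].split("-");
        auto lo = bounds[0].to!size_t(16);
        auto hi = bounds[1].to!size_t(16);
        if (addr >= lo && addr < hi)
            return fields[1][0] == 'r' && fields[1][1] == '-';
    }
    return false;
}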
Re: automatic NaN propogation detection?
On Saturday, 25 September 2021 at 07:53:11 UTC, Chris Katko wrote: Is that some sort of "NP complete" can't-fix issue or something? The general case is obviously unsolvable. Trivial proof: float x = nan; if (undecidable) use x. I'm sure your imagination can supply more realistic cases (but I promise they really do come up!). However, that doesn't mean we can't do flow analysis conservatively. Similar caveats apply to @live, but that does not mean it is useless. (Not saying @live isn't useless, just that this doesn't indicate that.) The bigger problem is that it really is hard to tell whether you 'meant' to use a nan somewhere or not. And if you tried to apply such a semantic analysis pass to existing code, you would find it riddled with false positives, where a float was default-initialized to nan and the compiler was unable to convince itself that the variable was overwritten on all paths before being used.
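A made-up illustration of that false-positive pattern (my own example, not from the thread):

float f(bool cond)
{
    float x;                // default-initialized to float.nan
    if (cond) x = 1.0f;
    if (cond) return x;     // only read when it was assigned above...
    return 0.0f;            // ...but a conservative checker may still flag it
}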
Re: Development of the foundation of a programming language
On Tuesday, 14 September 2021 at 03:24:45 UTC, max haughton wrote: On Tuesday, 14 September 2021 at 03:19:46 UTC, Elronnd wrote: On Monday, 13 September 2021 at 11:40:10 UTC, max haughton wrote: The dragon book barely mentions SSA, for example. In fairness, dmd doesn't use SSA either. That's not a good thing. No, but if the OP's goal is to contribute to dmd, learning SSA wouldn't be very helpful beyond a general acclimation to compiler arcana. (Unless they wish to add SSA to dmd--a worthy goal, but perhaps not the best thing to start out with.)
Re: Development of the foundation of a programming language
On Monday, 13 September 2021 at 11:40:10 UTC, max haughton wrote: The dragon book barely mentions SSA, for example. In fairness, dmd doesn't use SSA either.
Re: how to filter associative arrays with foreach ?
On Monday, 21 June 2021 at 03:59:10 UTC, someone wrote: Is there a way to filter the collection at the foreach-level to avoid the inner if? Here's how I would do it:

foreach (k, v; coll) {
    if (k == unwanted) continue;
    ...
}

You still have an if, but the actual loop body doesn't have to be nested, so it's easy to follow the control flow.
Re: noobie question, dub or meson?
Meson doesn't track dependencies properly for D, so your dirty builds will be wrong if you go that route. You might consider keeping the C and D code in the same repository, but with separate build systems; use dub to build the D code and whatever tool you prefer for the C. Or try reggae.
Re: Developing and running D GUI app on Android
On Monday, 11 January 2021 at 06:26:41 UTC, evilrat wrote: Android itself is just linux under the hood; however, the launcher starts a java process that fires up your activity class (main in native languages), and from there you just call your native code and that's it. It turns out that you don't strictly need the java wrapper. See https://github.com/cnlohr/rawdrawandroid
Re: Surprising behaviour of std.experimental.allocator
On Thursday, 24 December 2020 at 23:46:58 UTC, Elronnd wrote: reduced version: Further reduction: Alloc1 can just be ‘AllocatorList!(n => Region!Mallocator(MB))’.
Re: Surprising behaviour of std.experimental.allocator
On Thursday, 24 December 2020 at 16:12:31 UTC, Saurabh Das wrote: This causes a segfault when run with rdmd -gx: *snip* First, here's a reduced version:

void main()
{
    import std.experimental.allocator: allocatorObject, expandArray;
    import std.experimental.allocator.building_blocks.allocator_list: AllocatorList;
    import std.experimental.allocator.building_blocks.region: Region;
    import std.experimental.allocator.building_blocks.fallback_allocator: FallbackAllocator;
    import std.experimental.allocator.mallocator: Mallocator;
    import core.memory: GC;
    import std.stdio;

    enum MB = 1024 * 1024;

    {
        alias Alloc1 = FallbackAllocator!(
            AllocatorList!(n => Region!Mallocator(MB)),
            Mallocator);
        auto alloc1 = allocatorObject(Alloc1());
        GC.collect;
        alloc1.allocate(MB);
    }

    writeln(5); // this never gets printed; segfault happens upon exiting the above scope
}

I'm not 100% sure where the segfault comes from--though I think it's a problem with AllocatorList--but as a workaround, try replacing ‘AllocatorList!(n => Region!Mallocator(MB))’ with ‘AllocatorList!(n => Region!Mallocator(MB), NullAllocator)’.
Re: Is garbage detection a thing?
On Sunday, 29 November 2020 at 16:05:04 UTC, Mark wrote: I have no good understanding why "garbage collection" is a big thing and why "garbage detection" is no thing (I think so).

Because garbage detection is just as expensive as automatic garbage collection. So if you're going to go to the work of detecting when something is garbage, it's basically free to collect it at that point.

today there exist tools for C++ (allocator APIs for debugging purposes, or Address-Sanitizer or maybe also MPX) to just detect that your program tried to use a memory address that was actually freed and invalidated,

Note that address sanitizer is significantly slower than most ‘real’ GCs (such as are used by Java, or others).

why did Java and other languages not stop there but also made a system that keeps every address alive as long as it is used? Then the debug build would use a virtual machine that uses type information from compilation for garbage detection, but not garbage collection.

Address sanitizer does exactly what you propose here. The problem is this: testing can only prove the presence of bugs, never their absence. You may run your C++ program a thousand times with address sanitizer enabled and get no errors; yet your code may still be incorrect and contain memory errors. Safety features in a language--like a GC--prevent an entire class of bugs definitively.

One very minor criticism that I have is: With GC there can be "semantically old data" (a problematic term, sorry) which is still alive and valid, and the language gives me the feeling that it is a nice system that way. But the overall behavior isn't necessarily very correct, it's just that it is much better than a corrupted heap which could lead to everything possibly crashing soon.

The distinction here is _reachability_ vs _liveness_. So, GC theory:

A _graph_ is a type of data structure. Imagine you have a sheet of paper, and on the sheet of paper you have a bunch of dots. There are lines connecting some of the dots. In graph theory, the dots are called nodes, and the lines are edges. We say that nodes A and B are _connected_ if there is an edge going between them. We also say that A is _reachable_ from B if either A and B are connected, or A is connected to some C, where C is reachable from B. Basically, if you can reach one point from another just by following lines, then each is reachable from the other.

A _directed_ graph is one in which the edges have directionality. Imagine the lines have little arrows at the ends. There may be an edge that goes A -> B; or there may be an edge that goes B -> A. Or there may be both: A <-> B. (Or they can be unconnected.) In this case, to reach one node from another, you have to follow the arrows. So it may be that, starting at A, you can reach B, but you can't go the other way round.

The _heap_ is all the objects you've ever created. This includes the objects you allocate with 'new', as well as all the objects you allocate from the stack and all your global variables. What's interesting is that we can think of the heap as a directed graph. If object A contains a pointer to object B, we can think of that the same way as there being an edge going from node A to node B.

The _root set_ is some relatively small number of heap objects that are always available. Generally, this is all the global variables and stack-allocated objects. The name _reachable_ is given to any object which is reachable from one of the root set.
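To make that concrete, a small illustration of my own (not from the original post): once the last pointer to an object is dropped, the object is no longer reachable from the root set.

class Node { Node next; }

Node root;   // a global: part of the root set

void main()
{
    root = new Node;        // reachable: root -> A
    root.next = new Node;   // reachable: root -> A -> B
    root.next = null;       // B is now unreachable: no pointer to it exists anywhere
    // From here on the program has no way to touch B, so collecting it is safe.
}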
It is impossible for your program to access an unreachable object; there's no way to get a pointer to it in the first place. So it is safe for the GC to free unreachable objects.

But we can also add another category of objects: _live_ vs _dead_ objects. Live objects are ones which you're actually going to access at some point. Dead objects are objects that you're never going to access again, even if they're reachable. If a GC could detect which reachable objects were dead, it would be able to be more efficient and use less memory...hypothetically.

The reason this distinction is important, and the reason I bring up graph theory, is that liveness is impossible to prove. Seriously: it's impossible, in the general case, for the GC to prove that an object is still alive. Whereas it's trivial to prove reachability.

Now, it is true that there are some cases where an object is dead but still reachable. The fact of the matter is that in most such cases, the object becomes unreachable shortly thereafter. In the cases when it's not, it tends to be impractical to prove an object is dead. The extra work that it would take to prove deadness in such cases, if it were even possible to prove, would make it not a worthwhile optimization.

And when I have tested all runtime cases of my compiled software, which runs slow, but quite deterministically, I will go on and build the release b
Re: I need "windowsx.d" Someone can send It to me?
On Sunday, 27 September 2020 at 18:30:10 UTC, Imperatorn wrote: I converted it using VisualD: https://pastebin.com/jzwKRnKZ Try it, maybe it works. Somehow, I don't think this is going to fly:

static if(__cplusplus) {
extern (C) {/* Assume C declarations for C++ */
} /* __cplusplus */
Re: Why is "delete" unsafe?
On Wednesday, 23 September 2020 at 04:15:51 UTC, mw wrote: What do you mean by saying "it's definitely not safe" here? I mean: if I'm careful and know what I'm doing, e.g. remove all the references to any part of the `object` before calling core.memory.GC.free(object), is there still any inherent "unsafe" side of `free` I should be aware of? '"delete" is unsafe' doesn't mean 'any program which uses "delete" is unsafe'. What it means is that, in a language that has 'delete', /some/ programs will be unsafe. If your language has delete, you cannot guarantee that programs will be safe. Now, D is not safe by default ('delete' would definitely be disallowed in @safe code), but it still wants to have features that /encourage/ safety (fat pointers are a great example of this). 'delete' and 'free' are both equally unsafe; however, if you have verified that your usage of them is safe, it is fine to use them.
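As a tiny illustration of the kind of program the guarantee is about (my own sketch, not from the thread):

import core.memory : GC;

void main()
{
    auto a = new int;
    *a = 42;
    auto b = a;      // a second reference to the same allocation
    GC.free(a);      // fine only because *we* know what b does next...
    // *b = 1;       // ...but nothing in the language stops this use-after-free
}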
Re: Lambda capture by value
printers[i] = () { write(i); };

I know it looks silly, but if you make that:

printers[i] = (int i) { return () { write(i); }; }(i);

it will do what you want. Or, the slightly prettier (imo):

printers[i] = ((i) => () => write(i))(i);

Or, if you need to force it to return void:

printers[i] = ((i) => {write(i);})(i);
Re: Using tasks without GC?
You can control when the GC runs. So if you know the allocations are small enough that they won't OOM, you can say GC.disable, and it straight up won't run at all. But you can manually run a collection cycle (during a loading screen or whatever) with GC.collect. See http://dpldocs.info/experimental-docs/core.memory.GC.html
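A minimal sketch of that pattern (the placement and the minimize/enable calls are my additions, not from the post):

import core.memory : GC;

void main()
{
    GC.disable();      // no automatic collection cycles from here on

    // ... hot path: allocate freely, as long as it won't exhaust memory ...

    GC.collect();      // run a cycle explicitly, e.g. during a loading screen
    GC.minimize();     // optionally hand freed pages back to the OS
    GC.enable();       // re-enable automatic collection if desired
}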
Re: how to implement a function in a different D source file
On Tuesday, 26 November 2019 at 03:06:52 UTC, Omar wrote: the page here https://dlang.org/spec/function.html suggests you can implement a function in a different file, and a different tutorial somewhere else mentioned the endeavour of no-bodied-functions as a way of presenting a black-box type of interface. This is not a common pattern in D; the only reason it's used in C and C++ is that those languages don't have a real module system. However, the way it's done is with .di (D interface) files. Consider:

test.di:

module test;
void print_stuff();

main.d:

import test;
void main() { print_stuff(); }

test.d:

module test;
void print_stuff() {
    import std.stdio;
    writeln("I'm stuff");
}

You can verify the compiler is reading from test.di by putting test.d in a different directory from main.d and test.di.
Re: Any 3D Game or Engine with examples/demos which just work (compile&run) out of the box on linux ?
Up to now I was able to compile just "First Triangle example" https://www.dropbox.com/sh/myem3g69qjyo58v/AABZuvwuRDpnskhEC4AAK5AVa?dl= Why not start with that, then, and expand it until it has everything you need? If it helps, the basic gl startup code for my engine is at http://ix.io/1Z2X/d and http://ix.io/1Z2Y/d (engine is closed-source, but feel free to steal that part for w/e you want). It obviously depends on a bunch of other infrastructure, but it should be enough to get the idea. In the second link, you probably only want the constructor; everything else is irrelevant windowing stuff or boilerplate. I guess I'm mostly confused as to what your roadblock is: once you've imported derelict.opengl, you have all the same opengl functions as you would have in c++; once you've imported derelict.sdl, you have all the same sdl functions as in c++. The only things I can think of aside from that are math or gui libs--but in your c++ engine, it looks like you implemented those from scratch. By the way, I recommend using bindbc (https://github.com/bindbc) wrappers over derelict ones, if available, because Mike isn't really maintaining derelict anymore, and he was pretty much the sole maintainer. I'm currently using a weird branch of derelictsdl for vulkan support, and a custom version of derelictassimp so it doesn't break.
Re: Dub version
Dub add is not supported in dub 1.11.0. Use 'dub fetch'.
Re: Creating a dynamic library
dmd bla.d bla2.d -shared -fPIC -oflibbla.so
Re: Is prime missing in phobos?
Well, the purpose of the exercise kind of *is* to write a prime number generator. You can look up prime number sieves and algorithms. For REALLY large numbers, that takes an insane amount of time, and you can instead use algorithms such as the ones outlined at https://csrc.nist.gov/csrc/media/publications/fips/186/3/archive/2009-06-25/documents/fips_186-3.pdf (appendix C.3).
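For the small-number case, a sieve of Eratosthenes is only a few lines (my own sketch; fine for small limits, useless for the crypto-sized primes the appendix covers):

// Returns a table where isPrime[n] tells whether n is prime, for n <= limit.
bool[] sieve(size_t limit)
{
    auto isPrime = new bool[](limit + 1);
    isPrime[] = true;
    isPrime[0] = false;
    if (limit >= 1) isPrime[1] = false;
    foreach (i; 2 .. limit + 1)
        if (isPrime[i])
            for (auto j = i * i; j <= limit; j += i)
                isPrime[j] = false;
    return isPrime;
}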
Re: Is it possible to avoid call to destructor for structs?
Here's a simple solution. Just make Bar a pointer and free it before it can be destructed!

import std.stdio;

struct Bar {
    ~this() { writeln("~bar"); }
}

struct Foo {
    Bar* bar;
    this(int why_the_fuck_dont_structs_have_default_constructors) {
        bar = new Bar;
    }
    ~this() {
        writeln("~foo");
        import core.memory;
        GC.free(bar);
    }
}
Re: General performance tip about possibly using the GC or not
On Tuesday, 29 August 2017 at 00:52:11 UTC, Cecil Ward wrote: I don't know when the GC actually gets a chance to run. Another alternative that I *think* would work (maybe someone who knows a bit more about the GC can chime in?) is to manually stop the GC and then run collections when profiling shows that your memory usage is high. To get the GC functions, "import core.memory". To stop the GC (put this at the top of main()), "GC.disable()". To trigger a collection, "GC.collect()". That way you don't have to manually free everything; there's just one line of code.
Re: Primality test function doesn't work on large numbers?
Thank you! Would you mind telling me what you changed aside from pow() and powm()? diff isn't giving me readable results, since there was some other stuff I trimmed out of the original file. Also, while this is a *lot* better, I still get some lag generating 1024-bit primes, and I can't generate larger primes in a reasonable amount of time. Maybe my genbigint() function is to blame? It isn't efficient:

bigint genbigint(int numbits) {
    bigint tmp;
    while (numbits --> 0) {
        tmp <<= 1;
        tmp += uniform(0, 2);
    }
    return tmp;
}
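For what it's worth, a rough sketch of a less loop-heavy variant (assuming std.bigint's BigInt and std.random.uniform, standing in for the post's bigint alias):

import std.bigint : BigInt;
import std.random : uniform;

BigInt genbigint(int numbits)
{
    BigInt tmp;
    while (numbits >= 32)
    {
        tmp <<= 32;
        tmp += uniform!uint();   // 32 random bits per iteration instead of one
        numbits -= 32;
    }
    if (numbits > 0)
    {
        tmp <<= numbits;
        tmp += uniform(0u, 1u << numbits);
    }
    return tmp;
}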
Primality test function doesn't work on large numbers?
I'm working on writing an RSA implementation, but I've run into a roadblock generating primes. With more than 9 bits, my program either hangs for a long time (utilizing 100% CPU!) or returns a composite number. With 9 or fewer bits, I get primes, but I have to run with a huge number of iterations to actually _get_ a random number. It runs fast, though. Why might this be? Code: http://lpaste.net/1034777940820230144