On Wednesday, 4 May 2022 at 05:37:49 UTC, forkit wrote:
    inscope int[] i = new int[10000];

You often see the "here's an array of ints that exists only in one scope to do one thing, should we leave it floating in memory or destroy it immediately?" example in these GC discussions. Not to steal OP's thread or whatever particular needs he's trying to achieve, but hopefully to provide another use case: I write games, where performance is the number one priority, and I stumbled heavily over the GC when I first began writing them in D. Naively, I began writing the same kinds of engines I always had, probably with a C/C++ mentality of "just delete anything you create", with a game loop that could create or destroy hundreds of entities every frame, in applications running at 60+ frames per second. The results were predictably disastrous: collections ran every couple of seconds, causing noticeable stutters and disrupting the game timing. It may have been my fault, but it really, really turned me off the GC for a good long while.

I don't know what types of programs the majority of the D community writes. My perception, probably biased, was that D's documentation, tours, and blogs leaned heavily towards "run once, do a thing, and quit" applications that can happily leave everything to the GC, and this wasn't necessarily a good fit for programs that run for hours at a time and are constantly changing state. Notably, an early wiki post that people with GC issues were directed to revolved heavily around tweaks and suggestions for working within the GC, with the malloc approach treated as a last-resort afterthought.

Pre-allocating lists wasn't a good option, as I didn't want to set an upper limit on the number of potential entities. The emergency fix at the time was inserting GC.free calls to forcibly deallocate things. Ultimately, the obvious *correct* answer is just using the malloc/emplace/free combo, but I'm disappointed by how ugly and hacky it looks, at least until it's been wrapped in some nice NEW()/DELETE() templates.
```d
auto foo = new Foo;
delete foo; // R.I.P.
```
```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

// allocate raw memory for the instance, then construct the object in place
auto foo = cast(Foo) malloc(__traits(classInstanceSize, Foo));
emplace!Foo(foo);
// run the destructor, then release the memory by hand
destroy(foo);
free(cast(void*) foo);
```
Can you honestly say the second one looks as clean and proper as the first? Maybe it's a purely cosmetic quibble, but one feels like I'm using the language correctly (I'm not!), and the other feels like I'm breaking it (I'm not!).
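For what it's worth, the wrapping I have in mind is only a few lines. A minimal sketch, where the NEW/DELETE names and signatures are my own invention rather than any library API:

```d
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

// Hypothetical wrappers around the malloc/emplace/free combo above.
T NEW(T, Args...)(auto ref Args args) if (is(T == class))
{
    // allocate raw memory sized for the class instance, construct in place
    auto mem = cast(T) malloc(__traits(classInstanceSize, T));
    return emplace!T(mem, args);
}

void DELETE(T)(ref T obj) if (is(T == class))
{
    destroy(obj);           // run the destructor
    free(cast(void*) obj);  // release the memory
    obj = null;             // avoid a dangling reference
}
```

After which `auto foo = NEW!Foo(); DELETE(foo);` reads almost as cleanly as new/delete did.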

I still use the GC for simple niceties like computations and searches that don't occur every frame, though even then I've started leaning more towards std.container.array and similar solutions; additionally, if something IS going to stay in memory forever (once-loaded data files, etc), why put it in the GC at all, if that's just going to increase the area that needs to be scanned when a collection finally does occur?
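Concretely, what I mean by leaning on std.container.array for that kind of work (Particle is a made-up type for illustration):

```d
import std.container.array : Array;

struct Particle { float x, y, dx, dy; }

// Array!T keeps its elements in memory it manages itself, outside the
// GC heap, so none of this adds to the area a collection has to scan.
Array!Particle particles;
particles.reserve(1024);
particles.insertBack(Particle(0, 0, 1, 1));
// storage is released deterministically when `particles` goes out of scope
```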

I'd like to experiment more with reference counting in the future, but since it's just kind of a "cool trick" in D currently involving wrapping references in structs, there are some hangups. Consider for example:
```d
import std.container.array;

struct RC(T : Object) {
    T obj;
    // insert postblit and refcounting magic here
}

class Farm {
    Array!(RC!Animal) animals;
}

class Animal {
    RC!Farm myFarm; // Error: struct `test.RC(T : Object)` recursive template expansion
}
```
Logically, this can leak memory: a Farm and an Animal that reference each other and go out of scope simultaneously would never get deallocated. But something like this ought to at least *compile* (it doesn't), leaving it up to the programmer to handle the logical leak problems, or so my thinking goes. I also really hate having to prepend RC! or RefCounted! to *everything*, unless I wrap it all in prettier aliases.
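Assuming the RC wrapper worked at all, the classic way around the cycle would be to make one direction a plain, uncounted back-reference, and an alias can at least hide the RC! noise. A sketch, with all names mine:

```d
import std.container.array : Array;

struct RC(T : Object) {
    T obj;
    // postblit and refcounting magic elided
}

class Animal {
    Farm myFarm; // plain back-reference: breaks the cycle, deliberately not counted
}

// hypothetical alias so call sites don't have to spell out RC! everywhere
alias AnimalRef = RC!Animal;

class Farm {
    Array!AnimalRef animals; // owning, counted references
}
```

The trade-off is the usual one: the back-reference can dangle if the Farm dies first, which is exactly the kind of logical problem I'd rather be left to handle myself.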
