On Thursday, 27 April 2017 at 19:36:44 UTC, Ben wrote:
On Thursday, 27 April 2017 at 16:35:57 UTC, Moritz Maxeiner wrote:
You'll have to be more specific about what issue you're referring to. People not liking garbage collection? In any case, AFAIU DIP1000 was about more mechanically verifiable memory safety features when not using the GC.


Is it possible to run D without the GC AND the standard library?

It is possible to run a D program without the GC, as long as you don't allocate using the GC. If you want to see which D code does allocate using the GC, the compiler flag `-vgc` is your friend (compile Phobos with `-vgc` for kicks). Currently, not all of Phobos is free of GC allocations, most notably exceptions (refer to [1] and similar topics).
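As a minimal sketch (the file and function names are only illustrative), compiling the following with `dmd -vgc demo.d` reports each spot where a GC allocation may happen:

```d
// demo.d -- compile with `dmd -vgc demo.d`
int[] allocates()
{
    int[] a;
    a ~= 1;           // -vgc: appending to a dynamic array may allocate
    auto b = new int; // -vgc: `new` allocates from the GC heap
    return a;
}

void main()
{
    assert(allocates().length == 1);
}
```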

[1] http://forum.dlang.org/thread/occ9kk$24va$1...@digitalmars.com

Frankly, seeing in this example that the GC was in theory able to kick in 6 times in a simple 100 item loop, that is not efficient. If I did my own memory management, the variable cleanup would have been done in one go, right after the loop. Simply more efficient.

You replied to the wrong person here, seeing as I did not link to the article you're referring to, but your claim about efficiency is, honestly, ludicrous, so I'll reply: Expanding the contiguous memory region a dynamic array consists of (possibly moving it) once it overflows has absolutely nothing to do with the GC, or even with the language; it is how the abstract data type "dynamic array" is defined. D just does this transparently for you by default. If you already know the exact size, or a maximum size, you can allocate *once* (not 6 times) *before* entering the loop, using `new` or `.reserve` respectively, like that article explains in depth.
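For illustration, a short sketch of both preallocation strategies (the sizes are arbitrary):

```d
void main()
{
    // Exact size known: one allocation up front, then plain index assignment.
    auto exact = new int[](100);
    foreach (i; 0 .. 100)
        exact[i] = i;

    // Only an upper bound known: reserve capacity once, then append freely.
    int[] bounded;
    bounded.reserve(100); // single allocation covering up to 100 elements
    foreach (i; 0 .. 100)
        bounded ~= i;     // stays within the reserved capacity, no reallocation

    assert(exact.length == 100 && bounded.length == 100);
}
```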


Been thinking about this topic. D has a destructor; does that mean that if you allocate on the GC and then run your own destructor, it technically counts as manual memory management?

Um, what? Memory (de)allocation (in C often malloc/free) and object (de)construction (in C usually functions with naming conventions like `type_init`/`type_deinit`) are on two entirely different layers! Granted, they are often combined in C into functions with names like `type_new`/`type_free`, but they are conceptually two distinct things. Just to be very clear, here is a primitive diagram of how things work:
make object O of type T:
<OS> --(allocate N bytes)--> [memory chunk M] --(call constructor T(args) on M)--> [O]
dispose of O:
[O] --(call destructor ~T() on O)--> [memory chunk M] --(deallocate M)--> <OS>
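The two layers in that diagram can be made explicit in D itself; here is a sketch using the C heap instead of the GC (the struct is made up for the example):

```d
import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

struct T
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    // memory management layer: obtain N bytes
    void[] chunk = malloc(T.sizeof)[0 .. T.sizeof];
    // lifetime layer: call the constructor on that memory
    T* o = emplace!T(chunk, 42);
    assert(o.x == 42);
    // lifetime layer: call the destructor
    destroy(*o);
    // memory management layer: give the bytes back
    free(chunk.ptr);
}
```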

D's garbage collector conceptually changes this to:
make object O of type T:
<OS> --(GC allocates)--> [GC memory pool] --(allocate N bytes)--> [memory chunk M] --(call constructor T(args) on M)--> [O]
dispose of O:
[O] --(call destructor ~T() on O)--> [memory chunk M] --(deallocate M)--> [GC memory pool] --(GC deallocates)--> <OS>

with the specific restriction that you have *no* control over 'GC deallocates' and only indirect control over 'GC allocates' (by requesting more memory from the GC than is available in its pool).

Working on the memory chunk layer is memory management.
Working on the object layer is object lifetime management.
D offers you both automatic memory management and automatic lifetime management via its GC. What you describe is manual object lifetime management (which is what std.conv.emplace and object.destroy exist for) and has no effect on the automatic memory management the GC performs. You *can* do manual memory management *on top* of the GC's memory pool (using core.memory.GC.{malloc,free} or the newer, generic Allocator interface via std.experimental.allocator.gc_allocator.GCAllocator.{allocate,deallocate}), but these operations will (generally) not yield any observable difference from the OS's perspective.
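A sketch of both ideas, assuming a trivial struct: manual lifetime management of a GC-allocated object, and manual memory management on top of the GC pool:

```d
import core.memory : GC;
import std.conv : emplace;

struct S
{
    int x;
}

void main()
{
    // Manual lifetime management: the GC owns the memory, but we end the
    // object's lifetime ourselves. The memory itself stays with the GC.
    auto s = new S(1);
    destroy(*s);          // runs any destructor and resets *s to S.init

    // Manual memory management on top of the GC pool:
    void[] chunk = GC.malloc(S.sizeof)[0 .. S.sizeof];
    S* t = emplace!S(chunk, 2);
    assert(t.x == 2);
    destroy(*t);
    GC.free(chunk.ptr);   // back to the GC pool, not (necessarily) to the OS
}
```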


That is assuming the GC releases the memory when you tell it to. I remember seeing in some other languages (C# possibly?) that marking a variable to be freed only meant the GC freed the memory when it felt like it, not at the exact spot where you told it to.

Again, you seem to mix up object lifetime management and memory management. You cannot tell the GC to free memory back to the operating system (which is what returning pages to the OS, e.g. via the munmap syscall, does, and what you seem to be describing). You can *only* free memory you allocated *from* the GC *back* to the GC. The GC decides when (and if) any memory is ever freed back to the OS (which is kind of one major point of having a GC in the first place).


I personally think that people simply have a bad taste with GCs because they kick in too much outside their control.

In my experience, most people's aversion to GCs can be aptly described by the German proverb "Was der Bauer nicht kennt, das frisst er nicht" ("what the farmer doesn't know, he won't eat" - meaning that most people generally like living in comfort zones and rarely, if ever, dare to leave them of their own free will; that includes developers, sadly).

For 90% of cases the default behavior is good, but it's those 10% that leave a bad taste with people (the GC kicking in at critical moments and hurting performance in return).

Then don't allocate using the GC in such critical pieces of code. If you want automatic verification, annotate a function with @nogc (this also works with anonymous functions, lambdas, etc.); just be aware that not everything in druntime and Phobos that could (and should) be annotated currently is (if you encounter any such function, open a bug report so it can be fixed).
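A minimal sketch of such an annotation (the function name is made up); the commented-out line would be rejected at compile time because `new` allocates from the GC:

```d
@nogc int sumCritical(scope const(int)[] data)
{
    int sum = 0;
    foreach (v; data)
        sum += v;
    // auto a = new int[](10); // compile error: allocates in a @nogc function
    return sum;
}

void main()
{
    int[3] buf = [1, 2, 3]; // stack storage, no GC involved
    assert(sumCritical(buf[]) == 6);
}
```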
