On Friday, 12 October 2018 at 21:34:35 UTC, Atila Neves wrote:

-------------------------------
When writing a throwaway script...

...there's absolutely no need for a GC.

True. There's also absolutely no need for computer languages either, machine code is sufficient.

Funny. Now for real, in a throwaway script, what is there to gain from a GC? Allocate away and forget about it.

In fact, the GC runtime will only detract from performance.

Demonstrably untrue. It puzzles me why this myth persists.

Myth, is it now? Unless all you do is allocate memory (which isn't any kind of useful application), the GC's metadata is *cold* on pretty much every sweep. What's worse, you don't control how much data there is or where it lives. Need I say more? If you disagree, please do the demonstration then.

There are trade-offs, and one should pick whatever is best for the situation at hand.

Exactly. Which is *not at all* what the OP is encouraging people to do.

What this means is that whenever I have disregarded a block of information, say removed an index from an array, then that memory is automatically cleared and freed back up on the next sweep. While the process of collection and actually checking...

Which is just as easily achieved with just one additional line of code: free the memory.

*Simply* achieved, not *easily*. Decades of bugs have shown emphatically that it's not easy.

Alright, from one non-native English speaker to another, well done, I salute you. I also used the term "dangling pointer" previously, where I should've said "non-null". Strange you didn't catch that. To the point: *that* is a myth. The bugs you're referring to are not *solved* by the GC, they're swept under the rug. Because the bugs themselves are in people's heads, stemming from that proverbial programmer laziness. It's like everyone is Scarlett O'Hara with a keyboard.

For most applications, you *do* know how much memory you'll need, either exactly or as an estimate. Garbage collection is useful for the cases when you don't know and can't estimate, and even then only for a limited subset of those.
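To make that concrete, here's a minimal sketch (mine, not from the thread) of what "knowing how much you need" looks like: one allocation sized from the estimate, and that single extra line to give the memory back.

    import core.stdc.stdlib : malloc, free;

    void process(size_t n) // n is known, or estimated, up front
    {
        // One allocation, sized from the estimate...
        auto buf = cast(int*) malloc(n * int.sizeof);
        if (buf is null) return;
        // ...and the one additional line that frees it.
        scope (exit) free(buf);

        foreach (i; 0 .. n)
            buf[i] = cast(int) i;
    }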

Don't be a computer. Do more with GC.

Writing a throwaway script, there's nothing stopping you from using mmap or VirtualAlloc.

There is: writing less code to achieve the same result.

Well, I guess either of those does take more arguments than a "new", so yup, you do indeed write "less" code. Except that you have no clue how much more code is hiding behind that "new": how many indirections, DLL calls, syscalls, with libc's wonderful poison that is errno... You don't want to think about that. Then two people start using your script. Then ten, a hundred, a thousand. Then it becomes part of an OS distribution. And no one wants to "think about that".
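And since mmap came up: a rough, Posix-only sketch (mine, names made up, not from the post) of backing a throwaway script with one big mapped pool, so you know exactly what stands behind every "allocation":

    import core.sys.posix.sys.mman;

    void throwawayScript()
    {
        enum size_t poolSize = 64 * 1024 * 1024; // generous upper bound
        void* pool = mmap(null, poolSize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANON, -1, 0);
        if (pool is MAP_FAILED) return;
        scope (exit) munmap(pool, poolSize);

        auto base = cast(ubyte*) pool;
        size_t used;

        // Allocate away and forget about it: everything goes back
        // to the OS in a single munmap at scope exit.
        void* bump(size_t bytes)
        {
            auto p = base + used;
            used += bytes;
            return p;
        }

        auto ints = cast(int*) bump(1024 * int.sizeof);
        ints[0] = 42;
    }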

The "power" of GC is in the language support for non-trivial types, such as strings and associative arrays. Plain old arrays don't benefit from it in the slightest.

For me, the power of tracing GC is that I don't need to think about ownership, lifetimes, or manual memory management.

Yes you do, don't delude yourself. Pretty much the only way you don't is if you're writing purely functional code. But we're talking about D here. Reassigned a reference? You thought about that. If you didn't, you just wrote a nasty bug. How much more hypocritical can this get?
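A trivial made-up example of the kind of thing you still have to think about, collector or not:

    class Config { string dbHost = "localhost"; }

    Config active;                  // the "current" configuration

    void reload()
    {
        active = new Config;        // reassigned the reference...
        active.dbHost = "db.internal";
    }

    void main()
    {
        active = new Config;
        auto snapshot = active;     // someone else kept the old reference
        reload();
        // No leak, no crash: the GC happily keeps both objects alive.
        // But this code now acts on stale state, and no collector will tell you.
        assert(snapshot.dbHost == "localhost");
    }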

"Fun" fact: it's not @safe to "new" anything in D if your program uses any classes. Thing is, it does unconditionally thanks to DRuntime.

I also don't have to please the borrow checker gods.

Yeah, that's another extremum. I guess "Rustaceans" or whatever the hell they call themselves are pushing that one, aren't they? "Let's not go for a GC, let's straight up cut out whole paradigms for safety's sake..."

Yes, there are other resources to manage. RAII nearly always manages that, I don't need to think about that either.

Yes you do. You do need to write those destructors or scoped finalizers, don't you? Or, so help me, use a third-party library that implements them? There's fundamentally *no* difference from memory management here. None, zero, zip.
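Case in point, the sort of RAII wrapper someone still has to sit down and write (a sketch, not anyone's actual library code):

    import core.stdc.stdio : FILE, fopen, fclose;

    struct ScopedFile
    {
        FILE* fp;

        this(const(char)* path, const(char)* mode)
        {
            fp = fopen(path, mode);
        }

        ~this()                  // the "I don't think about it" line, written by hand
        {
            if (fp !is null)
                fclose(fp);
        }

        @disable this(this);     // no accidental copies sharing the handle
    }

    void use()
    {
        auto f = ScopedFile("data.txt", "r");
        // f.fp, if fopen succeeded, is closed when f goes out of scope
    }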

Sad thing is, you're not alone. Look at all the major OSs today. How long does it take to, I don't know, open a project in Visual Studio on Windows? Or search a huge file opened in 'less' on Unix? On an octa-core 4 GHz machine with 32 GB of 3 GHz memory? It should just pop up on the screen instantly, shouldn't it? Why doesn't it, then? Because most programmers think the way you do: "oh, it doesn't matter here, I don't need to think about that". And then they proceed to advocate those "awesome" laid-back solutions that oh-so-help them save so much time coding. Of course they do, at everyone else's expense. Decades later, we're now trying to solve problems that shouldn't have existed in the first place. You'd think that joke was just that, a joke...

But let's get back to D. Look at Phobos. Why does stdout.writefln need to allocate? How many times does it copy its arguments? Why can't it take non-copyable arguments? Does it change global state in your process' address space? Does it impose external dependencies? You don't want to think about that? The author likely didn't either. And yet everybody is encouraged to use it: it's out of the box, after all... Why is Socket a class, blown up from a puny 32-bit value to a bloated who-knows-how-many-bytes monstrosity? Will that socket close if you rely on the GC? Yes? No? Maybe? Why? Can I deploy the compiler on a remote machine with limited RAM and expect it to always build my projects successfully without running out of memory?
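For the socket question specifically, the honest answer is "don't bet on it": the spec doesn't guarantee the collector ever runs a class destructor. The boring version spells the cleanup out (a sketch, nothing more):

    import std.socket : TcpSocket, InternetAddress;

    void fetchSomething()
    {
        auto sock = new TcpSocket(new InternetAddress("example.com", 80));
        scope (exit) sock.close();   // deterministic; no guessing what the GC will do, or when
        // ... talk to the peer ...
    }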

I can go on and on, but I hope I finally made my point somewhat clearer. Just in case, a TLDR: *understand your machine and your tools and use them accordingly*. There are no silver bullets for anything, and that includes the GC. If you go on advocating it because it helped you write a 1kLOC one-time-use script, it's very likely I don't want to use anything you write.
