On Friday, 12 October 2018 at 23:24:56 UTC, Stanislav Blinov wrote:
On Friday, 12 October 2018 at 21:34:35 UTC, Atila Neves wrote:

-------------------------------
When writing a throwaway script...

...there's absolutely no need for a GC.

True. There's also absolutely no need for computer languages either, machine code is sufficient.

Funny. Now for real, in a throwaway script, what is there to gain from a GC? Allocate away and forget about it.

If you run out of memory, the GC scans and frees what it can. That's the gain.
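
To make it concrete (a sketch, assuming a druntime recent enough to expose GC.profileStats): the default GC only runs a collection when an allocation can't be satisfied from its existing pools, so a short-lived script may never pay for a scan at all.

// Sketch: count how many collections actually ran while allocating.
import core.memory : GC;
import std.stdio : writeln;

void main()
{
    const before = GC.profileStats().numCollections;

    foreach (i; 0 .. 1_000)
    {
        auto s = new int[](100);   // allocate away and forget about it
    }

    const after = GC.profileStats().numCollections;
    writeln("collections triggered: ", after - before);
}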

In fact, the GC runtime will only detract from performance.

Demonstrably untrue. It puzzles me why this myth persists.

Myth, is it now?

Yes.

Unless all you do is allocate memory (which isn't any kind of useful application), the GC's metadata is *cold* on pretty much every sweep.

*If* the GC scans.

There are trade-offs, and one should pick whatever is best for the situation at hand.

Exactly. Which is *not at all* what the OP is encouraging.

I disagree. What I got from the OP was that for most code, the GC helps. I agree with that sentiment.

Alright, from one non-native English speaker to another, well done, I salute you.

The only way I'd qualify as a non-native English speaker would be the pedantic one: asserting that I can't be native because I didn't learn it first. In any case, I'd never make fun of somebody's English if they're not a native speaker, and that's most definitely not what I was trying to do here - I assume the words "simple" and "easy" exist in most languages. I was arguing about semantics.

To the point: *that* is a myth. The bugs you're referring to are not *solved* by the GC; they're swept under the rug.

Not in my experience. They've literally disappeared from the code I write.

Because the bugs themselves are in people's heads, stemming from that proverbial programmer laziness. It's like everyone is Scarlett O'Hara with a keyboard.

IMHO, lazy programmers are good programmers.

For most applications, you *do* know how much memory you'll need, either exactly or as an estimate.

I don't, maybe you do. I don't even care unless I have to. See my comment above about being lazy.

Well, I guess either of those does take more arguments than a "new", so yup, you do indeed write "less" code. Except that you have no clue how much more code is hiding behind that "new",

I have a clue. I could even look at the druntime code if I really cared. But I don't.

how many indirections, DLL calls, syscalls with libc's wonderful poison that is errno... You don't want to think about that.

That's right, I don't.

Then two people start using your script. Then ten, a hundred, a thousand. Then it becomes a part of an OS distribution. And no one wants to "think about that".

Meh. Plenty of executables that ship with distributions are written in Python, Ruby or JavaScript.

For me, the power of tracing GC is that I don't need to think about ownership, lifetimes, or manual memory management.

Yes you do, don't delude yourself.

No, I don't. I used to in C++, and now I don't.

Pretty much the only way you don't is if you're writing purely functional code.

I write pure functional code by default. I only use side-effects when I have to and I isolate the code that does.
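
A trivial sketch of what I mean (the names are mine, purely illustrative): the logic lives in pure functions, and the side effects sit at the edge.

import std.stdio : writeln;

// Pure by default: the result depends only on the argument,
// no globals read or written. GC allocation is still fine here.
int[] doubledEvens(const(int)[] xs) pure
{
    int[] result;
    foreach (x; xs)
        if (x % 2 == 0)
            result ~= x * 2;
    return result;
}

void main()
{
    // The I/O (the side effect) is isolated in one place.
    writeln(doubledEvens([1, 2, 3, 4, 5, 6]));
}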

But we're talking about D here.
Reassigned a reference? You thought about that. If you didn't, you just wrote a nasty bug. How much more hypocritical can we get here?

I probably didn't write a nasty bug if the pointer that was reassigned was to GC allocated memory. It lives as long as it has to, I don't think about it.
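
That's what a tracing GC buys you: reachability decides the lifetime, not whichever assignment happened last. A trivial sketch:

import std.stdio : writeln;

void main()
{
    int[] a = [1, 2, 3];
    int[] b = a;       // second reference to the same GC allocation
    a = [4, 5, 6];     // reassigning `a` is neither a leak nor a dangling pointer:
                       // the old array lives as long as `b` still refers to it,
                       // and is collected once nothing does.
    writeln(b);        // [1, 2, 3] -- still valid
}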

"Fun" fact: it's not @safe to "new" anything in D if your program uses any classes. Thing is, it does unconditionally thanks to DRuntime.

I hardly ever use classes in D, but I'd like to know more about why it's not @safe.

Yes, there are other resources to manage. RAII nearly always manages those, so I don't need to think about them either.

Yes you do. You do need to write those destructors or scoped finalizers, don't you? Or, so help me, use a third-party library that implements them? There's fundamentally *no* difference from memory management here. None, zero, zip.

I write a destructor once, then I never think about it again. That's a lot different from worrying about closing resources all the time. I only write `scope(exit)` when the cleanup is needed in one place as well; otherwise I wrap the code in an RAII struct.
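
Something like this (a sketch with a hypothetical wrapper around a C FILE*; the names are mine, not from any library):

import core.stdc.stdio : FILE, fopen, fclose, fputs;

// Written once, then forgotten: the destructor closes the handle
// deterministically when the struct goes out of scope.
struct CFile
{
    FILE* fp;

    this(const(char)* path, const(char)* mode)
    {
        fp = fopen(path, mode);
    }

    @disable this(this);   // no copies, hence no double close

    ~this()
    {
        if (fp !is null) fclose(fp);
    }
}

void use()
{
    auto f = CFile("/tmp/scratch.txt", "w");   // hypothetical path
    if (f.fp !is null) fputs("hello\n", f.fp);
}   // closed here, with no scope(exit) at any call site

Once that struct exists, every use site gets the cleanup for free.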

Why is Socket a class, blown up from a puny 32-bit value to a bloated who-knows-how-many-bytes monstrosity? Will that socket close if you rely on the GC? Yes? No? Maybe? Why?

I don't know. I don't think Socket should even have been a class. I assume it was written in the D1 days.

Can I deploy the compiler on a remote machine with limited RAM and expect it to always successfully build my projects and not run out of memory?

If the compiler had the GC turned on, yes. That's not a point about the GC; it's a point about dmd.

