On Tuesday, 4 February 2014 at 01:36:09 UTC, Adam Wilson wrote:
On Mon, 03 Feb 2014 17:04:08 -0800, Manu <turkey...@gmail.com> wrote:

On 4 February 2014 06:21, Adam Wilson <flybo...@gmail.com> wrote:

On Mon, 03 Feb 2014 12:02:29 -0800, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

On 2/3/14, 6:57 AM, Frank Bauer wrote:

Anyone asking for the addition of ARC or owning pointers to D, gets pretty much ignored. The topic is "Smart pointers instead of GC?", remember? People here seem to be more interested in diverting to
nullable, scope and GC optimization. Telling, indeed.


I thought I made it clear that GC avoidance (which includes considering
built-in reference counting) is a major focus of 2014.

Andrei


Andrei, I am sorry to report that anything other than complete removal of the GC and replacement with compiler generated ARC will be unacceptable to a certain, highly vocal, subset of D users. No arguments can be made to otherwise, regardless of validity. As far as they are concerned the discussion of ARC vs. GC is closed and decided. ARC is the only path forward to the bright and glorious future of D. ARC most efficiently solves all memory management problems ever encountered. Peer-Reviewed Research and
the Scientific Method be damned! ALL HAIL ARC!

Sadly, although written as hyperbole, I feel that the above is fairly
close to the actual position of the ARC crowd.


Don't be a dick.
I get the impression you don't actually read my posts. And I also feel like
you're a lot more dogmatic about this than you think I am.

I'm absolutely fine with GC in most applications, I really couldn't give any shits if most people want a GC. I'm not dogmatic about it, and I've
**honestly** tried to love the GC for years now.
What I'm concerned about is that I have _no option_ to use D uninhibited
when I need to not have the GC.

These are the problems:
* GC stalls for long periods of time at completely unpredictable moments.
* GC stalls become longer *and* more frequent as memory becomes less available and the working pool becomes larger (what a coincidence).
* Memory footprint is unknowable. What if you don't have a virtual memory manager? What if your total memory is measured in megabytes?
* It's not possible to know when destruction of an object will happen, which has known workarounds (like in C#) but is also annoying in many cases, and supports the prior point.

Conclusion:
GC is unfit for embedded systems, one of the most significant remaining and compelling uses for a native systems language.

The only realistic path I am aware of is to use ARC, which IS a form of GC,
and allows a lot more flexibility in the front-end.
GC forces one very particular paradigm upon you.
ARC is a GC, but it has some complex properties __which can be addressed in
various ways__. Unlike a GC which is entirely inflexible.

You're not happy with ARC cleaning objects up on the spot? That's something many people WANT, though I understand that zero cleanup time in the running context is, in other situations, a strength of GC. Fine: just stick the pointer on a dead list, and free it later, either during idle time or on another thread. By contrast, I haven't heard any proposal for a GC that would allow it to operate in carefully controlled time slices, or strictly during idle time.
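The dead-list idea above can be sketched roughly as follows. This is hypothetical scaffolding in C++ (the names `Object`, `release`, and `drain_dead_list` are illustrative, not any actual D runtime API): when a count hits zero, the object is queued rather than freed, and reclamation is drained later under an explicit budget.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical object with an embedded reference count (illustrative only).
struct Object {
    int refcount = 1;
};

// Dead list: objects whose count reached zero but are not yet freed.
static std::vector<Object*> dead_list;

// Drop one reference; when the count hits zero, defer the free to the
// dead list instead of doing it on the spot.
void release(Object* obj) {
    if (--obj->refcount == 0)
        dead_list.push_back(obj);
}

// Called during idle time (or from another thread): free at most `budget`
// objects, so reclamation fits within a controlled time slice.
void drain_dead_list(std::size_t budget) {
    while (budget-- > 0 && !dead_list.empty()) {
        delete dead_list.back();
        dead_list.pop_back();
    }
}
```

The budget parameter is what gives the caller the time-slice control being argued for here; a tracing collector has no comparably simple knob.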
Cycles are a problem with ARC? True. How much effort are you willing to spend to mitigate the problem?
None: run a secondary GC in the background to collect cycles (yes, there is still a GC, but it has much less work to do).
Some: disable the background GC, and require user-specified weak references and the like.
Note: a user-preferred combination of the two could severely reduce the workload of the background GC, if it is still desired to handle some complex situations, or user errors.
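As a sketch of the "user-specified weak references" mitigation, here is how a weak back-edge breaks an ownership cycle under C++'s std::shared_ptr/std::weak_ptr, used here as a stand-in for compiler-generated ARC (the `live_nodes` counter is instrumentation added so the behavior is checkable):

```cpp
#include <memory>

static int live_nodes = 0;  // instrumentation: constructed-but-not-destroyed nodes

struct Node {
    std::shared_ptr<Node> child;   // strong (owning) edge: counted
    std::weak_ptr<Node>   parent;  // weak back-edge: not counted, breaks the cycle
    Node()  { ++live_nodes; }
    ~Node() { --live_nodes; }
};
```

Because the back-edge is weak, dropping the last external reference to the parent reclaims both nodes immediately; had `parent` been a strong reference, the pair would keep each other alive forever.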
Are there any other disadvantages to ARC? If there are, I don't know of them.

As far as I can tell, an ARC collector could provide identical convenience to the existing GC for anyone who simply doesn't care. It would also seem that it could provide significantly more options and control for those who do.

I have _yet to hear anyone present a realistic path forward using any form of GC_, so what else do I have to go with? Until I know of another path forward, I'll stand behind the only one I can see.
You're just repeating "I don't care about something that a significant subset of D developers do care about, and I don't think any changes should
be made to support them".
As far as I know, a switch to ARC could be done in a way that 'regular' users don't lose anything, or even notice... why is that so offensive?

I am not trying to be a dick. But I do feel like a small number of people are trying to gang up on me for daring to point out that the solution they've proposed might have bigger problems for other people than they care to admit.

You still haven't dealt with the cyclic-reference problem in ARC. There is absolutely no way ARC can handle that without programmer input; therefore, it is simply not possible to switch D to ARC without adding some language support to deal with cyclic refs. Ergo, it is simply not possible to seamlessly switch D to ARC without creating all kinds of havoc, as people now have memory leaks where they didn't before. In order to support ARC, the D language will necessarily have to grow/change to accommodate it. Apple devs constantly have trouble with cyclic refs to this day.
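The leak being described can be shown in a few lines of C++, again using std::shared_ptr as a stand-in for compiler-generated ARC (the `live_nodes` counter is illustrative instrumentation): with only strong references and no programmer-supplied weak-ref annotation, a two-node cycle is never reclaimed.

```cpp
#include <memory>

static int live_nodes = 0;  // instrumentation: constructed-but-not-destroyed nodes

struct Node {
    std::shared_ptr<Node> next;  // strong reference only: no weak-ref annotation
    Node()  { ++live_nodes; }
    ~Node() { --live_nodes; }
};
```

After building `a -> b -> a` and dropping both external references, each node still holds the other's count above zero, so neither destructor ever runs. No amount of reference counting alone fixes this; it takes either a cycle collector or the programmer marking one edge weak.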

I am not against supporting ARC side-by-side with the GC (I'm actually quite for it; I would love the flexibility), but it is unrealistic to make ARC the default option in D, as that would subtly break all existing D code, something that Walter has point-blank refused to do in much smaller, easier-to-find-and-fix cases. You can't grep for a weak ref. So if that is what you are asking for, then yes, it will never happen in D.

Also, I don't think you've fully considered what the performance penalty actually is for a *good* ARC implementation. I just leafed through the P-Code in the GC Handbook for their ARC implementation; it's about 4x longer than their best Mark-Sweep P-Code.

I would also like to point out that the GC Handbook points out six scientifically confirmed problems with ARC. (See Page 59)

1. RC imposes a time overhead on mutators in order to manipulate the counter.
2. Both the counter manipulation and the pointer load/store operations MUST be atomic to prevent races.
3. Naive RC turns read ops into store ops to update the count.
4. No RC can reclaim cyclic data structures, which are much more common than is typically understood. [Bacon and Rajan 2001]
5. The counter must be the same size as the pointer, which can result in significant overhead for small objects.
6. RC can still pause. When the last reference to the head of a large pointer structure is deleted, RC MUST delete each descendant node.
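Points 1 and 2 can be seen concretely in a minimal intrusive refcount. This is a sketch with hypothetical names (`Control`, `Ref`), not a production smart pointer: every pointer copy performs an atomic read-modify-write on the shared counter, which is exactly the mutator overhead the book describes.

```cpp
#include <atomic>

// Shared control block holding the atomic count (minimal, illustrative).
struct Control {
    std::atomic<long> count{1};
};

// Thread-safe counted reference: copying and destroying a Ref each cost
// an atomic RMW on the shared counter (point 1), and those operations
// must be atomic or concurrent copies would race (point 2).
struct Ref {
    Control* c;
    explicit Ref(Control* p) : c(p) {}
    Ref(const Ref& o) : c(o.c) {
        c->count.fetch_add(1, std::memory_order_relaxed);  // copy = atomic inc
    }
    Ref& operator=(const Ref&) = delete;
    ~Ref() {
        // Last reference reclaims; acq_rel orders the delete after all uses.
        if (c->count.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete c;
    }
};
```

Compare this with a tracing GC's mutator, where copying a pointer is a plain register move; the atomic traffic here is the overhead being weighed.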

Note that these are paraphrases of the book, not me talking. And these apply equally to ARC and vanilla RC.

Boehm demonstrated in one of his papers (2004) that thread-safe ARC may even lead to longer maximum pause times than a standard Tracing GC.

Most of us know and understand the issues with ARC, and those with a GC. Many of us have seen how they play out in systems-level development. There is a good reason all serious driver and embedded development is done in C/C++.

A language is the compiler and standard library as one unit. If Phobos depends on the GC, D depends on the GC. If Phobos isn't systems-level ready, D isn't systems-level ready. I've heard arguments here that you can turn off the GC, but that equates to rewriting functions that already exist in Phobos and not using any third-party libraries. Why would anyone seriously consider that an option? Embedded C++ has std:: and third-party libraries where memory is under control.

Realistically, D as a systems language isn't even at the hobby stage. I understand people are working in this area, as am I, eagerly testing and trying to contribute where possible. That's the fun part of D: watching and helping it mature into everything it can be.

Cheers,
ed
