On Tue, Jul 30, 2013 at 5:06 AM, David Jeske <[email protected]> wrote:

> On Mon, Jul 29, 2013 at 12:20 AM, Jonathan S. Shapiro <[email protected]>wrote:
>
>> David, I think it's clear that you feel very strongly opposed to GC in
>> some circumstances, and that's a perfectly OK choice for you to make. I
>> don't share your view. I don't think that there will *ever* be a GC
>> solution that is completely free of tradeoffs, just as there isn't a manual
>> solution that is completely free of tradeoffs.
>>
>
> I want to explain more about my experiences and bias with respect to GC.
> However, before I do I have a smaller point that came up while writing the
> essay below. I think Shap and I substantially agree about GC and the need
> to end world-stop. Where I think our opinions may differ is in the roles
> and priorities of unmanaged storage and regions in surmounting GC's limitations.
>

Not so. First, I haven't proposed regions as a general solution to GC's
limitations at all. What I *have* said is that the introduction of unmanaged
code is
categorically unacceptable.

We don't differ about the relative impact of regions and unmanaged storage.
We differ about the relative importance of "the cost of GC" vs. "the
national and international cost of unsafety". I'm willing to entertain
*safe*, manually-managed solutions. Indeed, I'm willing to admit safe unmanaged
solutions that might surprise you - such as a typed unmanaged heap in which
free puts the object back in a type-specific pool and outstanding
references are allowed to remain live, or one in which allocation reference
counts are used to lazily nullify outstanding pointers. These, coupled with
borrowed pointers, could go a very long way.
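
To make the second of those concrete, here is a rough sketch in C (mine, purely
illustrative; the names and sizes are made up, not anything from this thread) of
a type-specific pool in which free returns the slot to the pool and bumps an
allocation count, so that a stale reference is caught lazily at its next use
rather than becoming a wild pointer:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A type-specific pool: each slot only ever holds a `struct node`, so even
 * a stale reference still points at memory of the right type. */
#define POOL_SIZE 64

struct node {
    int value;
};

struct slot {
    struct node obj;
    uint32_t    gen;     /* allocation count, bumped on every free */
    int         in_use;
};

/* A handle carries the allocation count it was issued under. */
struct node_ref {
    uint32_t idx;
    uint32_t gen;
};

static struct slot pool[POOL_SIZE];

static struct node_ref node_alloc(void) {
    for (uint32_t i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            memset(&pool[i].obj, 0, sizeof pool[i].obj);
            return (struct node_ref){ .idx = i, .gen = pool[i].gen };
        }
    }
    return (struct node_ref){ .idx = POOL_SIZE, .gen = 0 };  /* pool exhausted */
}

/* "free" returns the slot to the type-specific pool and bumps the allocation
 * count, which lazily invalidates every outstanding handle to the old object. */
static void node_free(struct node_ref r) {
    if (r.idx < POOL_SIZE && pool[r.idx].gen == r.gen) {
        pool[r.idx].in_use = 0;
        pool[r.idx].gen++;
    }
}

/* Dereference checks the allocation count; a stale handle yields NULL rather
 * than a dangling pointer, so the heap stays type- and memory-safe. */
static struct node *node_deref(struct node_ref r) {
    if (r.idx < POOL_SIZE && pool[r.idx].in_use && pool[r.idx].gen == r.gen)
        return &pool[r.idx].obj;
    return NULL;
}

int main(void) {
    struct node_ref r = node_alloc();
    node_deref(r)->value = 42;
    node_free(r);
    printf("stale handle resolves to %p\n", (void *)node_deref(r));  /* null */
    return 0;
}

Even before a stale handle is caught, nothing unsafe is reachable through it,
because the slot never holds anything but a `struct node`. Borrowed pointers
would then let the check be elided on paths where the compiler can prove the
object is not freed during the borrow.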

What I'm categorically *not* willing to entertain is unsafe solutions.

And I would strongly prefer that the default model be general purpose. Make
easy things easy, and hard things hard.


> However, when we discuss the viability of managed languages for systems
> programming, we have to accept that whether we can achieve 25% CPU and
> memory overhead with tracing GC is *workload-dependent*.
>

Absolutely. And for most systems programs I see no reason why that overhead
should be that high. Also, you persistently neglect the
overheads of malloc/free in your discussions. GC may still lose the
performance debate, but if we're going to argue it on those grounds, let us
do so honestly.


>> For some applications, GC is notably faster than manual collection.
>>
>
> "faster" is too abstract a term. We need to make claims regarding either
> CPU-Efficiency or Responsiveness.
>

Very well. For many applications, the total CPU overhead of GC is
dramatically *less* than that of manual storage management, while running within
acceptable real-memory bounds and impeding responsiveness no more than
other sources of preemptive system activity.
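
For a sense of where that claim can come from, here is an illustrative sketch
(mine, not from this thread) of the allocation fast path in a copying
collector's nursery: a pointer bump and a limit check, whereas a
general-purpose malloc typically does free-list or size-class bookkeeping on
every malloc and again on every free:

#include <stddef.h>
#include <stdint.h>

/* Illustrative nursery for a copying collector: allocation is a pointer bump
 * plus a limit check. Objects are never freed individually; the whole nursery
 * is reclaimed at once after the survivors are evacuated. */
#define NURSERY_BYTES (1u << 20)

static _Alignas(16) uint8_t nursery[NURSERY_BYTES];
static size_t bump;                     /* next free offset */

static void *gc_alloc(size_t n) {
    n = (n + 15u) & ~(size_t)15u;       /* keep 16-byte alignment */
    if (bump + n > NURSERY_BYTES)
        return NULL;                    /* would trigger a minor collection */
    void *p = &nursery[bump];
    bump += n;
    return p;
}

int main(void) {
    int *p = gc_alloc(sizeof *p);       /* a few instructions, and no free */
    if (p)
        *p = 1;
    return 0;
}

What the collector pays instead is the evacuation of survivors and the write
barrier, which is exactly where the dependence on allocation rate and tenured
churn discussed below comes in.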


> GC (with world-stop) has notably worse responsiveness than manual
> collection. This is because the timing of GC cannot be controlled, while
> the timing of manual collection can be precisely controlled.
>

Because it is inadequately qualified, this statement is false.


> GC (with no world-stop) should have similar or better responsiveness than
> manual collection, with an amount of CPU-and-Memory Overhead that is
> related to the tenured heap-churn rate.
>

The second part of this depends on the collector technology. There are
dependencies on the tenured heap churn rate, the general allocation rate,
and several other factors.


> If a developer can demonstrate a successful application in C for which I
> can't create a user-accepted equivalent in language-X, then I don't believe
> language-X is a good long-game replacement for C.
>

This is an *excellent* summary of where we disagree. I'm much less
interested in the performance and user perception of individual
applications, and much more interested in systemic effects. If asked "will
you use this slower program rather than that faster program?", the answer is
obviously no. If asked "will you accept a requirement for $100 of additional
memory in order to dramatically improve your system security and stop worrying
about many classes of attacks?", the answer really should be yes. In fact,
given DDoS, if the answer to the second question is "no", then the user
should be taxed as a hazard to other users. Unsafe behavior can and should
be regulated when it has consequences for other parties.


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
