On Sat, Jul 27, 2013 at 11:11 AM, Ian P. Cooke <[email protected]> wrote:

>
> On Jul 27, 2013, at 12:29 , "Jonathan S. Shapiro" <[email protected]>
> wrote:
>
> I'm sure that I'm leaving some things out, and I don't mean to be unfair
> by doing so. Here's how *I* would state the requirements for a systems
> language:
>
>    1. Is a *great* general-purpose language that
>    2. Supports prescriptive stack allocation
>    3. In which run-time expense of computing statements and expressions
>    is understandable to the experienced programmer
>    4. In which that run-time cost understanding is reasonably balanced
>    against compilation techniques like inlining and template expansion - C++
>    fails this test, not because it *has* these features, but because of
>    the way in which these idioms are commonly used.
>    5. In which certain types of safe, prescriptive storage management are
>    *possible* when algorithms are written by expert programmers.
>
> I agree with these statements, although I would also say something about
> how things map down to the underlying hardware/architecture.
>

I think that's what I was attempting to capture in [3], though that's a
very subjective metric, and your additions are very useful.


>  You have more concerns as a systems programmer than other kinds of
> programmers:
>
> * you want appropriately sized data types that directly map to hardware
> registers
>

Concur.
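
For concreteness, C's <stdint.h> already captures most of this, and it's
the sort of thing I'd want as a baseline (a sketch of the capability, not
a syntax proposal):

    #include <stdint.h>

    /* Fixed-width types that map directly onto machine registers and
     * memory words, as opposed to "at least this big" types whose
     * sizes vary by platform. */
    uint8_t   status;     /* exactly 8 bits */
    uint32_t  ctrl_word;  /* exactly 32 bits */
    uintptr_t addr;       /* wide enough to round-trip a pointer */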


> * you may care what registers are used and what the contents are
>

I tend to believe that if you care about this you should be using assembly
language. Can you give a compelling example in which this should be true
for high-level language code?

With the sole exception of writing *really* low-level runtime code (like a
GC implementation), I confess I've never seen a compelling case for this
with modern compilers. The compiler is *that* much better at register
allocation than you are.
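
The one idiom I can express in a current language is reserving a register
for the runtime's own use, which GNU C supports as a global register
variable. A GCC-specific sketch (x86-64 assumed, and the choice of %r15 is
mine):

    /* GNU C extension: reserve %r15 process-wide, e.g. to hold the
     * GC allocation pointer. The compiler will not allocate this
     * register for anything else in code that sees this declaration.
     * GCC-specific; x86-64 assumed. */
    register void *gc_alloc_ptr asm("r15");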


> * you care about how data structures are laid out in memory:
>

Yes.


> you need to access specific addresses (memory-mapped devices),
>

Potentially for kernel or driver code.
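
In C that tends to look like the sketch below. The device and its address
are hypothetical; a real driver takes them from the board documentation:

    #include <stdint.h>

    /* Hypothetical UART transmit register at a fixed physical address.
     * The volatile qualifier forces the compiler to perform each
     * access exactly as written. */
    #define UART_TX_ADDR 0x10000000u

    static volatile uint32_t *const uart_tx =
        (volatile uint32_t *)UART_TX_ADDR;

    static void uart_putc(char c) {
        /* A real driver would poll a status register first. */
        *uart_tx = (uint32_t)c;
    }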


> you need to know if it's packed or not, aligned or not. (e.g. you wouldn't
> find defrepr anywhere but in a systems programming language).
>

Agreed.
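
In C terms the capability looks roughly like this (GCC attribute syntax;
again a sketch of the capability, not of anybody's surface syntax):

    #include <stdint.h>

    /* Wire-format header: no padding, field order is the protocol's,
     * not the compiler's. */
    struct __attribute__((packed)) pkt_hdr {
        uint8_t  version;
        uint16_t length;  /* would otherwise be padded to offset 2 */
        uint32_t crc;
    };

    /* Descriptor that a (hypothetical) device requires at a 16-byte
     * boundary. */
    struct __attribute__((aligned(16))) dma_desc {
        uint32_t addr;
        uint32_t len;
    };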


> * you care about the memory hierarchy and how long your cache lines are.
>  You have explicit ways of avoiding false sharing.
>

I agree that this can be very important, but I think it's beyond the scope
of what a high-level language should try to solve in detail. I tend to
believe that given known layout and sizing, the rest of this should remain
in the hands of the programmer. I'm also concerned that adding controls for
this to the language may make support difficult or impossible on platforms
where the concepts don't exist (e.g. the CLR).

But this *may* be because I haven't thought about it adequately, and I've
been able to solve my particular requirements in the past with only this
much to go on. I can see some cases where a keyword like "cachealign" might
be useful, for example.
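
For illustration, here is roughly what that hypothetical "cachealign"
would desugar to (GCC attribute syntax again; the 64-byte line size is an
assumption, not a portable constant):

    #include <stdint.h>

    #define CACHE_LINE 64  /* assumed, not portable */

    /* One counter per core. The alignment puts each element on its
     * own cache line, so updates from different cores don't falsely
     * share a line; sizeof rounds up to CACHE_LINE. */
    struct __attribute__((aligned(CACHE_LINE))) per_core_counter {
        uint64_t count;
    };

    static struct per_core_counter counters[8];  /* one per core */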

Is there anything specific that you would identify here as desirable?


> * you care about swapping, usually to the point that if your application
> is swapped out, horrible things happen
>

No. I agree that swapping is important, but it occurs at a completely
different level of abstraction than a systems programming language can
reasonably address. I think the closest you can get to this in practice is
to say that you *sometimes* require the ability to tightly bound the heap
footprint of your program.
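
To make concrete where that lives today: on POSIX systems both knobs are
OS interfaces, not language features. A sketch, with error handling
reduced to return codes:

    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    /* Pin the address space into RAM and cap its growth. Note that
     * RLIMIT_AS bounds the whole address space, not just the heap. */
    static int pin_and_bound(size_t limit_bytes) {
        struct rlimit rl = { limit_bytes, limit_bytes };
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            return -1;
        return mlockall(MCL_CURRENT | MCL_FUTURE);
    }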

If this is a general requirement for *all* programs, you have bigger
problems than a programming language and runtime can solve for you. iOS
comes to mind as an example of a design that suffers badly in this
regard.


> * you care about how language-level threads are mapped to cores and how
> they are scheduled (priorities, context-switches, etc)
>

That's important, but it's not a language-level issue. The language needs
to provide adequate support for concurrency, and the environment library
needs to get that mapped to OS threads and cores, but the latter isn't
directly a language design issue.
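
That division of labor is visible in current practice: the language hands
you a thread, and Linux-specific library calls do the mapping. A
glibc-specific sketch:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to one core and give it a real-time
     * priority. Both are environment-library operations (and the
     * SCHED_FIFO call requires privilege); neither is language
     * semantics. */
    static int bind_and_schedule(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (pthread_setaffinity_np(pthread_self(), sizeof set, &set) != 0)
            return -1;

        struct sched_param sp = { .sched_priority = 10 };
        return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    }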

I do agree that there are ways to booger the language design so as to make
this impossible, and those are good things to avoid.


> I think that it's your number 3 that causes people to avoid garbage
> collection (at least it does to me).  Even if you understand the cost of
> each kind of collection, you usually don't know when it's going to happen.
>

I've been hearing that argument for 35 years now. During those 35 years,
I've heard only two or three design pattern scenarios in which (assuming a
properly implemented GC) the programmer had any business caring about this.


> Usually there are critical sections where you must not be preempted by
> either the garbage collection runtime or the kernel.
>

If true, this is an important issue, but I'm not convinced that it's true.
I understand how critical sections work, but in all of the use-cases I've
seen, non-preemption by the kernel isn't the right answer.

Non-preemption by GC is relevant when you have a hard time bound. And even
there, strictly speaking, a sufficiently small preemption is OK. The place
where this really becomes a problem is when GC can be triggered by a
second-party thread that *isn't* in a critical section, and can therefore
impact the behavior of the thread in the critical section. This kind of
thing can't really be solved without more sophisticated types of heap
isolation and garbage collection than are currently widespread.

I do have some approaches to dealing with this running around in my head.
Talking about real-world use cases would be very helpful here.


> If there's a way to disable the GC for the duration of those sections
> (possibly the lifetime of the application) then great.  Or, I suppose, if
> you have the hardware resources you can dedicate one or more cores for GC
> and say, 'run over there and never take actions that will interrupt my
> application threads running on these other cores' , that would be
> acceptable.
>
> Is that what you meant by number 5?  Assuming the presence of an expert
> programmer, can they write a complex application that allocates all the
> memory it needs on start up and then run with a guarantee that the GC will
> not cost a single cycle?  I think it falls under that 'pay for what you
> use' philosophy.  If I don't need a collector then any time spent running
> one, even if it doesn't interrupt my program, is time wasted that could be
> used by other processes.
>

I'm certainly thinking about mechanisms that could achieve what you are
describing in suitably careful codes, and that would make *checking* those
codes for compliance possible. We coded both EROS and Coyotos this way, for
example, and it's well within what is feasible and reasonable to express in
the type system that the program does not allocate after startup.

Though I'd personally be willing to accept an approach that required a
compacting GC to be run before the non-allocating "main loop" began.
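
Sketched in C, the discipline is the one below; the missing piece today is
a checker for the "nothing allocates past this point" claim, which is
exactly what I'd like the type system to provide:

    #include <stdlib.h>

    /* Everything the program will ever need comes out of one arena
     * obtained during startup. The steady-state code never calls
     * malloc/free, so a collector (if one exists at all) has nothing
     * to do after an initial compacting pass. */
    static unsigned char *arena;
    static size_t arena_used;

    static int startup(size_t arena_bytes) {
        arena = malloc(arena_bytes);  /* the only heap allocation */
        return arena ? 0 : -1;
    }

    static void *bump_alloc(size_t n) {
        /* Carves from the arena; never touches the system heap.
         * Bounds checking elided for the sketch. */
        void *p = arena + arena_used;
        arena_used += n;
        return p;
    }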

The "pay for what you use" philosophy is problematic. While I understand
the appeal, in practice it too often turns into "everyone pays for your
ability to *not* use a few things".


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
