On 03-Aug-2000, Nigel Perry <[EMAIL PROTECTED]> wrote:
> >I understand that point, but if doing that means that you need to
> >implement the basic things like argument passing and procedure
> >calling yourself, using your own virtual machine, rather than
> >by using the underlying runtime's argument passing and procedure calling
> >mechanisms, then I'd say that it is looking more like putting a round
> >peg in a square hole than a good fit.
>
> Passing arguments on a separate stack is pretty lightweight.
Well, I'm not sure that's true. Consider what you pay:
- If you only have one stack, then the stack needs to be a
  stack of Object, and so for every argument that you put on the
  stack you need to
    - box it before putting it on the stack
      (if it is not already boxed)
    - downcast it to the right type after taking it off the stack
    - unbox it (if the type is an unboxed type)
  (Using more than one stack can reduce such costs, but it
  increases the complexity of the approach significantly, and
  multiple stacks have significant performance costs of their
  own that need to be weighed against the potential benefits.)
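To make the boxing cost concrete, here is a rough sketch in Java, whose boxing, downcasts, and unboxing behave much like the CLR's; the class and method names are hypothetical, not anything from an actual Haskell backend:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical single argument stack: every slot must be Object,
// so a primitive argument pays a box on push and a checked
// downcast plus unbox on pop.
public class BoxedArgStack {
    private final Deque<Object> stack = new ArrayDeque<>();

    void pushInt(int x) {
        stack.push(x);           // autoboxing: allocates an Integer
    }

    int popInt() {
        Object o = stack.pop();  // comes back as plain Object
        return (Integer) o;      // checked downcast, then unbox to int
    }

    public static void main(String[] args) {
        BoxedArgStack s = new BoxedArgStack();
        s.pushInt(42);
        System.out.println(s.popInt()); // prints 42
    }
}
```

With the runtime's own calling convention, an int argument would instead travel in a register or on the system stack, with none of these steps.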
- To get efficient access to your stack, you need to use at least one
  extra register to hold your stack pointer, or (quite likely) two
  extra registers, one for the array base and one for the offset.
  Note that 32-bit x86 has only eight general-purpose registers,
  of which typically only six are usable (the stack pointer and
  frame pointer are reserved), so you very quickly run out...
  Alternatively, if these are not kept in registers, then the cost
  of accessing the stack will be higher.
- By not using the system stack, you reduce locality.
  This shows up in the greater register pressure (see above),
  and can also lead to more cache misses or cache collisions.
- If the stack is an array, then (at least for verified code)
  every push and pop needs a bounds check.
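A minimal sketch of the array-backed variant, again in Java with hypothetical names; as with verified CIL, every indexed access below carries an implicit bounds check unless the JIT can prove it redundant:

```java
// Hypothetical array-backed argument stack. The two fields are
// exactly the "array base" and "offset" that compete for
// registers; slots[sp] is bounds-checked on every push and pop.
public class ArgStack {
    private final Object[] slots;
    private int sp = 0;          // index of the next free slot

    ArgStack(int capacity) {
        slots = new Object[capacity];
    }

    void push(Object v) {
        slots[sp++] = v;         // implicit bounds check here
    }

    Object pop() {
        Object v = slots[--sp];  // and here
        slots[sp] = null;        // don't keep a stale reference alive
        return v;
    }
}
```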
- Because of aliasing issues, the .NET runtime is very
  unlikely to optimize away the pushes and pops to your stack
  when it inlines methods. And indeed it will treat these
  pushes and pops as more expensive than ordinary argument passing
  (since they are!), and so it will be less inclined to inline
  such methods in the first place. Of course, doing a good job
  of inlining in the Haskell compiler will help reduce the
  damage here, but the Haskell compiler can't do cross-language
  inlining, and can't do JIT-time inlining across different
  assemblies or of dynamically loaded code, so you will still
  lose out in some scenarios.
However, I guess it depends on what you mean by "lightweight".
I guess one could argue that the costs of most other things pale
in comparison to the costs of having lazy evaluation as the default ;-)
--
Fergus Henderson <[EMAIL PROTECTED]> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger [EMAIL PROTECTED] | -- the last words of T. S. Garp.