A small thought occurred to me the other morning regarding unboxed arrays.

Given that unboxed arrays had been planned for CLR 1.0, I was pondering
why the CLR team might have decided not to implement them, and why
unsafe fixed arrays are currently restricted to reference-free types.
Here is some speculation:

In the absence of unboxed arrays, the total size of an object tends to be
small. There are limits to how many fields programmers will add by hand in
their programs, so that effect is additive. The unboxed array construct is
therefore the only way to introduce a multiplicative effect, by replicating
some object many times.

This has implications for type description records. Once objects can
contain thousands of words, the traditional bit vector identifying pointer
words ceases to be a space-effective encoding: it grows linearly with
object size, even though the layout of an unboxed array is just one
element layout repeated. You begin to want a "little language" for
describing reference locations.

A second concern involves stack barriers. Stack barriers work well when the
size of a frame (measured in object references) is relatively bounded. If a
frame can contain thousands of references, it becomes hard for a barrier to
render the scanning phase "incremental enough".

Finally, large frames impede stack-in-the-heap designs, in which
activation frames are themselves allocated as heap objects and therefore
want to stay small.

So for a variety of reasons, I think that our implementation needs to be
free to heap-allocate unboxed objects that would normally appear in the
stack frame when that frame becomes too large. This doesn't change any
semantics. The concern is that it reduces the programmer's direct
understanding of data layout, perhaps to the degree that this should be
treated as a compile-time warning or error rather than a problem to be
fixed automatically. I'm not sure.


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
