--- Michael Lazzaro <[EMAIL PROTECTED]> wrote:

> Primitive types were originally intended for runtime speed, thus an
> "int" or a "bit" is as small as possible, and not a lot of weird
> runtime
> checking has to take place that would slow it down.  It can't even be
> undef, because that would take an extra bit, minimum, to store. 

This just ain't so.

I once worked on a CPU simulator, and in order to set watch values on
arbitrary memory we used a "key" value that, if present in simulated
memory, indicated that a search of the "watches table" was in order.

That key was chosen empirically: we histogrammed the ROM images and
active program state and picked the lowest-frequency value. Thus, the
"fetch byte" primitive would automatically check the watches table and
notify whenever a 0xA9 was fetched. (Sometimes it really meant 0xA9,
other times it meant "0x00, but halt execution".)

The same can be done here, if the internals folks can assume that the
special case (here, undef) is really uncommon. To wit:

For 'bit', the key value is (eenie, meenie, ...) '1'.
Any stored '1' will trigger a search of the table of undef bit
addresses. Presuming that bit values will not frequently be undef, the
search will be cheap and the storage requirement will be on the order
of

C + Num_undef_bits * sizeof(addr_t)

That overhead is greater than one extra bit per value when few or no
bit objects are in use, and very much smaller than one extra bit per
value when many bit objects are in use (and few of them are undef).
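
Sketched in C (a strawman of mine, not anything the internals folks
have committed to; the linear scan stands in for whatever hash the
real thing would use, and the bit-address encoding is invented):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    typedef uintptr_t addr_t;

    static addr_t undef_bits[1024];   /* C + num_undef * sizeof(addr_t) total */
    static size_t num_undef;

    static bool is_undef_bit(addr_t bit_addr)
    {
        for (size_t i = 0; i < num_undef; i++)
            if (undef_bits[i] == bit_addr)
                return true;
        return false;
    }

    typedef enum { BIT_0, BIT_1, BIT_UNDEF } bit_value;

    /* A stored 0 is always a defined 0; a stored 1 is the key value,
       and triggers the (presumed rare) table search. */
    bit_value read_bit(const uint8_t *base, size_t index)
    {
        addr_t bit_addr = (addr_t)base * 8 + index;  /* bit-granular address */
        if (((base[index / 8] >> (index % 8)) & 1) == 0)
            return BIT_0;
        return is_undef_bit(bit_addr) ? BIT_UNDEF : BIT_1;
    }

Storing undef is just "write a 1 and append the address"; storing a
real 0 or 1 removes any matching entry first.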

In short:

It's possible, even easy, to implement ANY feature (properties, undef,
etc.) for primitive types in this manner. It absolutely *IS* correct to
say "That's an implementation detail" and leave it to the internals
team to figure out HOW they want to do it.

So what's the difference REALLY?

=Austin




