--- Michael Lazzaro <[EMAIL PROTECTED]> wrote:
>
> OK, I think we agree that 'default' refers to what to put in the
> 'holes' of an array (or hash, but that's a separate discussion.)
> When
> you overlay a real hash on top of your default values, the default
> values "show through the holes". So now we just have to define what
> "holes" are.
>
> An assertion: The 'is default' property overrides the default
> 'empty'
> value of the given underlying type. For a normal array of scalars,
> that 'empty' value is C<undef>. But some scalar types can be set to
> undef, and some can't:
>
> my @a; # 'empty' value of a scalar is undef
> my int @a_int; # 'empty' value of an 'int' is 0
> my str @a_str; # 'empty' value of a 'str' is ''
I understand these, and they seem to make sense.
> my Int @a_Int; # 'empty' value of an 'Int' is undef (right?)
> my Str @a_Str; # 'empty' value of a 'Str' is undef (right?)
It's going to be "undef but 0" or "undef but ''", but since type
promotion will handle that automatically, yes.
> So C<is default <def>> is defining the value to use as the 'empty
> value' of the _underlying cell type_.
And therefore, if you try to specify an invalid <def> for the
_underlying cell type_, you should get an error, either at compile time
or at run time, as soon as it gets noticed.
my int @a is default "foo"; # Compile time error.
my int @a is default $param1; # Run time error if $param1 is bogus.
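The validate-the-default-against-the-cell-type behavior can be sketched in Python (illustrative only; `TypedDefaultArray` is a hypothetical name of mine, not Perl 6 machinery):

```python
# Sketch: a default value must be valid for the underlying cell type,
# and a bogus default is rejected as soon as it is noticed -- here,
# at construction time (the analog of the run-time case above).

class TypedDefaultArray:
    """Array whose holes show a default; the default must fit the cell type."""

    def __init__(self, cell_type, default):
        if not isinstance(default, cell_type):
            # Analog of: my int @a is default "foo";  -- an error.
            raise TypeError(
                f"default {default!r} is not valid for cell type "
                f"{cell_type.__name__}")
        self.cell_type = cell_type
        self.default = default
        self.cells = {}

a = TypedDefaultArray(int, 5)        # fine: 5 is a valid int default
try:
    TypedDefaultArray(int, "foo")    # analog of: my int @a is default "foo"
except TypeError as e:
    print("rejected:", e)
```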
> There are two credible choices, AFAICT:
>
> Solution 1: If you attempt to SET a cell to its 'empty value', it
> will be set to its default:
>
> my int @a is default(5);
> @a[5] = 0;    # setting the 'empty value' actually yields 5
> @a[5] = undef; # autoconverted to 0, plus a warning; still yields 5
>
> my Int @a is default(5); # NOTE difference in type!
> @a[5] = 0; # THIS really does set it to 0
> @a[5] = undef;# and this sets it to 5
>
> So you can't set something to its type's own empty value, because it
> will, by definition, thereafter return its "overloaded" empty value,
> <def>.
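Michael's Solution 1, as quoted above, can be sketched in Python (an illustration with names of my own choosing, not real Perl 6): for a primitive-typed array, storing the type's 'empty value' is indistinguishable from never having stored anything, so the default shows through.

```python
# Sketch of Solution 1: storing the cell type's 'empty value' leaves
# a hole, and holes read back as the array's default.

class IntDefaultArray:
    EMPTY = 0  # the 'empty value' of a primitive int cell

    def __init__(self, default):
        self.default = default
        self.cells = {}

    def __setitem__(self, i, value):
        if value is None:               # analog of undef: autoconvert to 0
            value = self.EMPTY          # (with a warning, in the Perl version)
        if value == self.EMPTY:
            self.cells.pop(i, None)     # storing the empty value leaves a hole
        else:
            self.cells[i] = value

    def __getitem__(self, i):
        return self.cells.get(i, self.default)

a = IntDefaultArray(default=5)
a[5] = 0          # "sets" it to the empty value...
print(a[5])       # ...so it reads back as the default: 5
a[5] = 7
print(a[5])       # 7
```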
I believe that I completely understand what you are saying. I am sure
that I absolutely disagree with what I believe I understand you to be
saying.
my $answer is (";-)" but true);
[[ This is quoted out of order, but addresses 1, above. --agh ]]
> In spite of the perhaps surprising nature of solution 1, I think it
> is probably the more correct solution, if you really insist on
> putting a default value on a primitive-typed array. As it
> points out, you can still get both behaviors, simply by choosing
> int vs. Int, str vs. Str, etc.
> Solution 2: _ANY_ other solution would require the introduction of
> 'fake empty' and 'really empty', and require arrays to keep track of
> the difference.
>
> my Int @a is default(5);
>
> @a[3] = 3; # there are now 4 items in the array
> @a[2]; # was autoset to undef, so returns 5
> @a[4]; # doesn't exist, so returns 5
>
> @a[2] = undef; # well, it's still undef, but now mark it
> # as a 'real' undef, so don't return 5.
>
> This is essentially adding another layer of defined-ness on each
> cell, and therefore requires an additional flag be kept & checked
> for each array element. While this is certainly a possibility, I
> worry about the complexity it introduces.
You're confusing specification with implementation, again.
I don't propose to require "really empty" and "fake empty". What I
propose is that:
1- If a range of values gets "strongly implied", like @a[2] in your
example, then they should be populated with the default value. After
all, it's the default.
2- If a range of values is "weakly implied", as in the examples
provided by Leopold Toetsch:
@a[12345678] = 1;
@a[12000];
The "interior" values haven't actually been created because the array
is in "sparse" mode.
Any read from them will return @a.default, in your example 5.
3- Any action which serves to destroy, delete, defenestrate, or
absquatulate (I love that word!) the value will cause the value to
return the default value once again, whether it's
allocated-but-destroyed (@a[2]) or unallocated (@a[12000]).
4- Any explicit action by the programmer is taken as gospel, or as
close to it as possible via promotion:
my int @a is default(5);
@a[2] = undef; # Warning: 'undef' used where primitive 'int' expected.
@a[2]; # 0, because int(undef) is 0.
delete @a[2];
@a[2]; # 5, because deleting restores the default.
my Int @a is default(5); # NOTE: type change.
@a[2] = undef; # undef, because according to you this is okay.
# (I'm not being sarcastic -- you're the
# "edge cases" guy.)
@a[2]; # undef, because that's what the programmer told me.
delete @a[2];
@a[2]; # 5, because it's the default, as above.
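The behavior proposed in points 1-4 above can be sketched in Python (illustrative only; `DefaultArray` and `delete` are my names, not Perl 6): unallocated or deleted cells read as the default, while anything the programmer explicitly stores, including None as the analog of undef, is returned verbatim.

```python
# Sketch of the proposed semantics: sparse storage, explicit
# assignment is gospel, and deleting restores the default.

class DefaultArray:
    def __init__(self, default):
        self.default = default
        self.cells = {}            # sparse: only touched indices exist

    def __setitem__(self, i, value):
        self.cells[i] = value      # explicit action by the programmer wins

    def __getitem__(self, i):
        # Unallocated ("weakly implied") cells return the default.
        return self.cells.get(i, self.default)

    def delete(self, i):
        # Destroying the value restores the default.
        self.cells.pop(i, None)

a = DefaultArray(default=5)
a[12345678] = 1
print(a[12000])    # 5: never created, the array stays sparse
a[2] = None        # explicit undef, taken as gospel
print(a[2])        # None
a.delete(2)
print(a[2])        # 5: deleting restores the default
```

Note that nothing here needs a user-visible "fake undef" versus "real undef": the sparse dictionary either holds an explicit value or it doesn't, which is exactly the point about this being an internals detail.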
Now: Does this require a "fake undef" and a "real undef"?
WHO CARES?
That's p6-internals. I think that it could be coded either way. I also
think that, as a performance boost, it may be valid to just store 0's
in the uninitialized sections until they get accessed, so the
p6-internals guys MAY want to do fake/real undef. But that's not the
charter of this list.
=Austin