> I am really astonished to hear protection keys being thought
> of as "brittle" under transformation: that is just the opposite of what they 
> are about!

Sorry to astonish you. :)

> Executive summary:
> 
>   - de Bruijn indices are a good assembly language of
>       binding constructs, suitable for automatic transformation,
>       but not suitable for human use
> 
>   - the classical name-based scheme is suitable for human
>       use, in principle, but is brittle under transformation
>       (many transformations can not be applied without
>       renaming); the more important these transformations
>       become, the less suitable this scheme is, for either
>       humans or machines (because of excessive renaming)
> 
>   - Berkling's protection keys add flexibility to name-based
>       schemes, *removing* their brittleness under transformation;
>       it combines suitability for automated transformation
>       with readability for humans (in fact, both de Bruijn's
>       and the classical name-based scheme are special cases
>       of Berkling's)

Well, you're just asserting, without evidence, that Berkling's keys remove
the brittleness of name-based schemes. And I'm afraid I've seen enough
evidence in my experience to make me doubt your claim.

Just take a look at what happens to the definition of alpha-conversion in a 
language with Berkling's keys. Traditionally, when you alpha-convert a binding 
x, you can stop the induction whenever you hit an inner binding (shadowing) of 
x. Not so with your proposal: now you have to keep going -- and decrement an 
accumulator argument to the algorithm (not exactly obvious!) -- to look for 
"outer" references within the inner binding.

What does this mean in practice? It means that someone refactoring at an outer 
level may be unaware that they've affected the index of an inner reference by 
changing a binding. It means you lose some of the locality that you get with 
traditional lexical scope.

>> The bigger issue is that scoping mechanisms in the de
>> Bruijn tradition are brittle for programming. They make
>> sense as intermediate representations or notations for
>> proof frameworks, because they can (sometimes) be
>> easier to reason about formally, but they're fragile in
>> the face of refactoring.
> 
> Sorry, but you have two directly conflicting statements
> in one sentence there (unless I misunderstand what you
> mean by "fragile"?): the point of de Bruijn choosing that
> representation was to make automatic transformation
> of scoped terms possible/easy, without breaking proofs.
> That is the opposite of "fragile" to me.

No contradiction. A machine is much better than a human at consistently
shifting big collections of integer indices without making a mistake. As you
acknowledged, de Bruijn notation is not fit for humans.
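
The bookkeeping I have in mind is the standard shifting operation --
sketched here over a minimal term encoding of my own choosing:

    // shift(d, c, t): add d to every index >= the cutoff c, i.e. to
    // every variable that is free relative to the insertion point.
    // Terms: {ix: n} variable, {lam: body} abstraction, {app: [f, a]}.
    function shift(d, c, t) {
      if ("ix" in t)  return {ix: t.ix >= c ? t.ix + d : t.ix};
      if ("lam" in t) return {lam: shift(d, c + 1, t.lam)};
      return {app: [shift(d, c, t.app[0]), shift(d, c, t.app[1])]};
    }

Trivial for a machine to get right every time; tedious and error-prone for a
person staring at 2s and 0s instead of names.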

> The problem is wider than 'this'; 'this' just seems to be the main
> use case where the problem surfaces in Javascript practice at the
> moment.

And the reason for this is that |this| is implicitly bound. Which, as I said
in my last email, is a good motivation for considering making it possible to
explicitly bind |this|. IOW, there are other solutions than the one you
describe that address the problem. And I believe your solution is worse than
the problem.
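
To spell out the fragility I mean -- this is ordinary ES5 semantics, nothing
hypothetical:

    var counter = {
      count: 0,
      inc: function () { this.count++; }
    };
    counter.inc();          // fine: |this| is counter
    var inc = counter.inc;  // an innocuous-looking refactoring...
    inc();                  // ...and |this| is now the global object
                            // (undefined in strict mode), not counter

An explicit-|this| mechanism would let the reader see, right at the
function, what it expects |this| to be -- the way |Function.prototype.bind|
lets you pin it down explicitly today.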

> With scopes and prototypes, Javascript has two separate binding chains, but 
> with prototypes, we can use different prefixes, "(prototype.)*.name", to skip 
> entries in that chain.

Prototype chains and lexical environments are two totally different semantic 
constructs -- only one is used for scope. They shouldn't be confused.
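
A quick demonstration that the two chains are consulted independently:

    var x = "lexical";                      // resolved via the scope chain
    var obj = Object.create({x: "proto"});  // resolved via the prototype chain
    obj.f = function () { return [x, this.x]; };
    obj.f();  // ["lexical", "proto"] -- same name, two unrelated lookups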

> Btw, have you ever wondered whether 'var'-bindings are
> recursive?

This is a really basic question about JS, and not appropriate for this list.
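
For the record, the answer fits in one line:

    var x = x;  // no ReferenceError: a |var| binding scopes over its
                // own initializer, so the right-hand side sees the
                // freshly created (still undefined) x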

>> Seriously, the problem you're trying to solve is that |this|
>> is too fragile in the face of refactoring, but you're solving
>> it with a mechanism that's just as sensitive to refactoring.
> 
> It seems I am still not communicating Berkling's ideas
> successfully (sorry about that):

No, I understand the semantics you describe. I think you just haven't 
understood what I'm saying. I'm not talking about automatic rewriting. All of 
these representations of binding structure (names, de Bruijn indices, the 
"locally nameless" approach, the Berkling representation you describe) are 
different ways of representing graphs as trees. For the sake of formal systems, 
they're all equally expressive. The problem is that some of them are more 
susceptible to human error than others.
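
For instance, here is one and the same binding graph -- a two-level lambda
term whose body mentions both binders -- spelled out in three of those tree
notations (the keyed spelling is my ad-hoc rendering):

    \a. \b. a b     classical names: distinct names, no keys
    \.  \.  1 0     de Bruijn: pure indices, no names
    \x. \x. #x x    Berkling: names plus protection keys; #x skips one
                    binder of x. All-distinct names gives you the
                    classical scheme; a single name everywhere gives
                    you de Bruijn.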

If you spend time working with tree manipulation languages like XPath, you
find that being able to refer to "parent" links makes your program sensitive
to fine-grained changes in the tree structure, which means that refactoring
is more likely to introduce mistakes.
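
You can run the experiment in any browser console (standard DOM XPath API;
the toy documents are mine):

    var parse = function (s) {
      return new DOMParser().parseFromString(s, "application/xml");
    };
    var before = parse("<chapter><section><para/></section></chapter>");
    var after  = parse("<chapter><div><section><para/></section></div></chapter>");

    function pick(doc) {  // "two parents up from <para>"
      return doc.evaluate("//para/../..", doc, null,
                          XPathResult.FIRST_ORDERED_NODE_TYPE, null)
                .singleNodeValue.nodeName;
    }
    pick(before);  // "chapter"
    pick(after);   // "div" -- one wrapper element added, and the same
                   // query silently means something else

Berkling-style protection keys count "how many binders of this name stand
between me and my target" -- exactly the kind of path-relative reference
that goes stale when someone edits the path.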

> If protection
> keys were at all "brittle" in the face of refactoring, none of
> his systems would ever have worked.

Nonsense -- there are all sorts of technologies that work and are brittle.
Ask me over a beer some time, and I'll name a few. ;)

> Quite contrary to being "brittle", protection keys introduce
> the necessary flexibility that allows these systems to *avoid
> breaking* the binding structure while automatically rewriting
> the code.

I think the key to the misunderstanding here is the word "automatically." Of 
course you can automatically refactor just about any representation of variable 
binding. The question with a given representation is whether it's sensitive to 
small changes introduced by ordinary human refactoring (i.e., the everyday 
experience of programming) and therefore more error-prone.

Dave

