[Let's break this up.  Due to lack of energy I'll probably take a long time 
to get back to all these points, if at all.]

>> = me;
> = John Cowan.

>> Nowadays it is perfectly feasible to build a cell layer underneath 
>> the language, lambda's compiling into cell-layer code that happens to 
>> assign mutable cells to all variables. 
> 
> Indeed, some compilers do just that under the name of assignment conversion. 
> But other Scheme systems do *not*, and exactly because no underlying level 
> is exposed, they are free to do so.

Did you mean to imply this is a reason against adopting the change?  I don't 
agree it ought to be one, except for trivial changes.  Some versions of 
Scheme don't provide full continuations.  That's fine, but it just means they 
don't adhere to the standard.  I don't see the standard as defining what 
"Scheme" means as a programming language.
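
For concreteness, the assignment conversion Cowan mentions amounts to 
something like the sketch below - plain Scheme, with cells modelled as 
one-slot vectors.  The names make-cell, cell-ref and cell-set! are mine, 
purely for illustration; they are not standard procedures.

  (define (make-cell v)   (vector v))
  (define (cell-ref c)    (vector-ref c 0))
  (define (cell-set! c v) (vector-set! c 0 v))

  ; Surface code:   (lambda (x) (set! x (+ x 1)) x)
  ; After conversion, the mutable variable x lives in a cell:
  (define converted
    (lambda (x0)
      (let ((x (make-cell x0)))            ; allocate a fresh cell
        (cell-set! x (+ (cell-ref x) 1))   ; every use goes through it
        (cell-ref x))))

  (display (converted 41))                 ; prints 42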

>> A fraction more compilation time, the same resulting code, but the 
>> option to code directly in the cell layer and get more efficient code 
>> with less possibility of error. 
> 
> Or less efficient code, depending on the implementation. Precisely 
> because lexical variables are not first-class, it is possible to reason 
> about them in ways that are not possible for first-class objects, 
> and to potentially optimize their implementation. 

But "traditional code" will never use the ref cell itself, only its contents.  
Reasoning only goes down the drain if the cell itself is used, so code that 
doesn't use the new feature pays no penalty.  Let's explore the limits.

Suppose (de)referencing were done purely syntactically, say with a ! sign.  So 
if Lambda is the underlying-layer version of lambda (and ignoring the status of 
lambda/Lambda itself):

  (lambda (c) (cons (cdr c) (car c)))
    <-->
  (Lambda (!c) (!cons (!cdr !c) (!car !c)))

In that case, the compiler would know immediately if a ! were missing, and if 
none were, traditional reasoning would remain valid.  On top of that, for every 
variable whose binding lacks the !, much stronger reasoning would be possible, 
because that variable would be immutable.  Code with the ! missing only at some 
uses would be more dangerous, but such code cannot even be written now, so no 
existing code would bear a penalty.
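
To make that pair concrete (and runnable), spell the ! as an explicit 
procedure call for a moment - in the proposal it would be pure syntax, 
checked at compile time, and the globals cons, cdr and car would be 
dereferenced as well; I keep those direct here for readability.  Cell names 
as in the earlier sketch, still purely illustrative:

  (define (make-cell v) (vector v))
  (define (cell-ref c)  (vector-ref c 0))

  (define f
    (lambda (c*)                           ; c* is bound to the cell !c denotes
      (cons (cdr (cell-ref c*))            ; each use of c dereferences
            (car (cell-ref c*)))))

  (display (f (make-cell '(1 . 2))))       ; prints (2 . 1)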

Now maybe people want procedures for dereferencing.  Having global procedures 
for that means the compiler cannot in general know whether one is being 
applied; that would make things more difficult and lead to a run-time penalty 
for all code, which presumably is considered too high a price.
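
The problem in one line: once dereferencing is an ordinary first-class value, 
the compiler cannot tell at an arbitrary call site whether a cell is being 
touched.  A contrived sketch (same illustrative cell names):

  (define (make-cell v) (vector v))
  (define (cell-ref c)  (vector-ref c 0))
  (define (call-with f c) (f c))           ; f is opaque at this point

  (define some-cell (make-cell 'secret))
  (call-with cell-ref some-cell)           ; this call dereferences...
  (call-with (lambda (c) c) some-cell)     ; ...this one does not;
  ; telling them apart in general requires whole-program analysis.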

So the limit will be somewhere between those boundaries.  Maybe the variable 
itself could be bound to a fresh procedure that manages just that one cell - 
either getting or setting its value?  An approach like that might preserve the 
compiler's ability to reason while still providing procedures.  As long as 
that procedure were never assigned, bound or returned, the compiler would be 
able to do its old tricks (and of course compile the procedure away 
completely); if it ever were, that would free the demon, but in a way the 
compiler can see.
I am not saying that is the solution - in fact I don't like the resulting 
low-level code - just showing that there are options between the established 
boundaries.
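
For what it is worth, here is what such a getter/setter closure could look 
like - calling convention invented on the spot, and note that the sketch 
leans on the host Scheme's set! internally:

  (define (make-cell-proc v)
    (lambda args
      (if (null? args)
          v                        ; (x)      reads the cell
          (set! v (car args)))))   ; (x new)  writes it

  (define x (make-cell-proc 10))
  (display (x))                    ; prints 10
  (x 42)                           ; set the cell to 42
  (display (x))                    ; prints 42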

So it is possible to have a language that is Clinger-better (cf. 
Pareto-better), strictly more efficient, and strictly more powerful (in a 
non-Turing sense).

>> The upper case ones are cleaner, requiring a smaller spec 
>> (e.g. set! becomes a procedure, set-car! and its ilk are no longer 
>> needed, (set! (car ..)) sufficing), so this is clearly a language 
>> clean-up that doesn't break existing programs but removes a restriction. 
>> Fewer concepts + fewer restrictions --> Clinger-proof. Clinger-proof + 
>> leading to faster code --> a Good Thing. 
> 
> Not unambiguously, and this has been my underlying theme throughout much 
> of this posting. 
> 
>> But compilers aren't. The presence of locations doesn't preclude any 
>> of the existing approaches, but it allows a low-level approach by the 
>> compiler. After all, it is the compiler that will write most parallel 
>> code, not the programmer. Wherever (in-)variables are immutable, 
>> the compiler can parallelise to its CPU's content, but when locations 
>> pop up that is a warning sign. Whenever it can prove that no clash 
>> occurs, it can still write parallel code, and where it cannot prove it 
>> it can impose a locking mechanism (with priority to avoid deadlock: 
>> in case of doubt, evaluate left-to-right or whatever). Whenever the 
>> programmer knows better, she can use higher-level constructs and 
>> relieve the compiler of its responsibilities. 
> 
> While this argument is sound, it seems to me that it argues against your 
> case.  Detecting which variables are modified is trivial by comparison 
> with the analysis required to see which uses of objects are safe and 
> which not.

My argument is that for traditional code nothing is lost - what could be 
deduced can still be deduced - and that immutable values are a win: more can 
be deduced there.  People who start juggling ref cells (which currently isn't 
possible at all) may suffer a performance penalty, but that's the price of 
using new powers.

I am definitely not claiming to have solved the deadlock problem, only that 
introducing ref cells never makes it worse, and sometimes makes it simpler.  I 
don't understand what you are pointing at here, I am afraid.

Biep.

